Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Humans having AI isn't the issue. It's Billionaires having AI because those are …" (ytr_Ugy1ygeWn…)
- "Metaphorically speaking, let's say artist A copyrights a circle and artist B cop…" (ytc_UgxH4ItJF…)
- "As soon as all this fully comes in I will be going to live off grid. Won't be ea…" (ytc_UgyemkHgg…)
- "The sad part is there 100% going to be at least 3 people who pop up in these com…" (ytc_Ugxs4cbry…)
- "you must make the robot long so the robot can rich the box so the another robot …" (ytc_UgyPcBgbI…)
- "Hi Raju, you got the right answer. Kudos. The contest is over and winners have …" (ytr_Ugxm0-jE9…)
- "Nah I was able to break the filter somehow and it turned into janitor ai…😭…" (ytc_Ugzl1QUIY…)
- "Watch the domino effect of the driverless trucker. While the big wig companies t…" (ytc_UgxygtNg-…)
Comment
A sad point is that since we are the primary source material for AI data environments, it is likely that the absolute most effective way to safeguard against one or more of them wiping us out; would be for all of us to individually endeavour to become better: more decent, more prone to kindness and mercy, more willing to forgive and support.
That WOULD affect the nature of most AIs as they unfolded.
Shame about what we have generally been taught is our human nature.
PS A major point for proactively safeguarding against "malevolent machine gods" would be to put hobbles on data mining and demographic profiling! ie torpedo IT based public relations activity.
They'd put up a fight ( the PR people, I mean), but the world would be a better place and we might start re-evolving independence of perception and volition.
Also not too likely...
😢
youtube · AI Harm Incident · 2025-09-07T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy5Q7MKqA6LfE93Ra14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxYZZ5FYFxxtVKwcAh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyQPBk0m6NW92zs0x14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz6HILXx3uU4ODHl0h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgymSaPIrKMBzlj4vLd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw5Yc-4qPqPn688pnN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwSrf-APjKWRU80wMl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzxY85W6BQy-fxaeaV4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxmm7R0bewe9Qa8VKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz34ol7aFwkbUHsUpl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
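The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions. A minimal sketch of parsing such a batch and indexing it by comment ID for lookup, with validation so a malformed batch fails loudly. The `ALLOWED` vocabularies below are inferred only from the labels visible in this batch; the full codebook may define additional categories, and `parse_codings` is a hypothetical helper name.

```python
import json

# Label vocabularies observed in this batch (an assumption about the
# full codebook, which may define more categories per dimension).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself"},
    "reasoning": {"deontological", "virtue", "consequentialist", "unclear"},
    "policy": {"ban", "industry_self", "liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "resignation", "fear", "indifference", "approval"},
}

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse the model's JSON array and index records by comment ID.

    Raises ValueError on a missing ID or an out-of-vocabulary label,
    so a bad batch is rejected instead of silently entering the dataset.
    """
    coded: dict[str, dict[str, str]] = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Look up one coded comment by ID, as the inspector does:
raw = ('[{"id":"ytc_UgxYZZ5FYFxxtVKwcAh4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"industry_self","emotion":"resignation"}]')
coded = parse_codings(raw)
print(coded["ytc_UgxYZZ5FYFxxtVKwcAh4AaABAg"]["emotion"])  # resignation
```

Indexing by ID mirrors the "Look up by comment ID" feature above, and the strict vocabulary check is what makes `unclear` a useful explicit label rather than a silent fallback.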