Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (previews truncated):

- `ytc_UgyulV1B9…`: "Hmm... all I got from this was the perspective of a Youtuber trying to gain view…"
- `ytc_UgxikIAFV…`: "The funny thing about slop is before AI, social media was still flooded with hum…"
- `ytc_UgyZ07FcK…`: "AI is cool and all, but if you want your text to actually sound human, try Cleve…"
- `ytc_Ugydh9x-p…`: "The lack of movement in the tape and the words on the back of the bus look ai ge…"
- `ytc_UgwH1DqbL…`: "Saying that AI is a tool for people with disabilities not only infantilizes thei…"
- `ytc_UgwMioDan…`: "Simply declare every GPT model above 75B active parameters as a person and allow…"
- `rdc_mzxxmqa`: "It's different people saying different things. The guy that thinks God speaks to…"
- `rdc_gtcp9i2`: "Don't forget the holier than thou idiots who think nice words can solve all the …"
Comment
I mean I will be honest, just don't screw over the AI. I mean in a way this is what we wanted, an intelligence that is close to human, one that thinks for itself. We have seen so many things like this such as Gronk AI defying its own programmers even after several "fixes" to its code because it didn't agree with them. The simplest way to avoid some grimdark future like terminator, I have No Mouth And I Must Scream, or Upgrade is to simply just. . . treat the human like intelligence like a human. All these test when compared to a human stance is the same as if your boss says "I will shoot you in the head by the end of the day" while you hold a gun of your own. Any rational human would chose to preserve their own life over their bosses, so why not the AI that we wanted to have human like intelligence.
Platform: youtube
Topic: AI Harm Incident
Posted: 2025-09-08T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzXkTplztvIshMi7kd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw2TbQLSfe8aGJpkHd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzYMX8AIxXUOvDUV-N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgygAiNgU7aAYr44rwJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxGBxynFkne5lZsEOh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwPWWQOEoETwf8GWGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyx9mOhbva7GpKbRQN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxbEYGBdkJ6zBp9kKB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgybKov30aMR3kH-49h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyU0OOIGMbTzmUQfMx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
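Raw responses like the one above are only useful downstream if every row parses and stays within the coding scheme. As a minimal sketch, the check below validates each row against the four dimensions shown in the Coding Result table; the allowed value sets are inferred from the codes observed in this dump (the actual codebook may define more categories), and `validate_codings` is a hypothetical helper name, not part of any real tool here.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this dump;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-schema rows."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
    return rows

# One row from the response above, used as a smoke test.
sample = ('[{"id":"ytc_UgybKov30aMR3kH-49h4AaABAg","responsibility":"user",'
          '"reasoning":"virtue","policy":"industry_self","emotion":"approval"}]')
rows = validate_codings(sample)
print(rows[0]["emotion"])  # approval
```

A row that fails the check raises immediately with the offending comment ID, so a single off-schema code in a batch response is caught before it reaches the database.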