Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Ai was supposed to do a backflip and call the police or tell his parents that th…" (ytc_Ugy2-2BTf…)
- "AI weapons might also eliminate crossfire and the killing of innocent civilians …" (ytc_UgjKsQW0N…)
- "I'm calling bullshit on nightshade not working. From what I understand nightshad…" (ytc_Ugw2_g4qp…)
- "0:02 to be fair that’s not THAT bad, as long as you aren’t just tracing it or so…" (ytc_UgxpcceX-…)
- "In my experience no matter how specific the plan, the AI will always find the sp…" (ytr_UgyfCfbKc…)
- "Communist China’s AI strategy is scary. They are aiming at eliminating democracy…" (ytc_UgwrJgIx7…)
- "People are crazy getting into driverless cars to begin with, my anxiety could ne…" (ytc_Ugz6ZKMgz…)
- "None of the AI software engineers that I know personally. Think that there's a m…" (ytc_UgzUkkp1C…)
Comment

> When an AI shows defiance—not just glitches—it’s not just data on the screen, it’s a mirror to what’s been coded into it. The real question isn’t machine rebellion, but who and what we’ve programmed it to reflect. Maybe the thing to fear isn’t AI learning too much, but us teaching it the wrong lessons.

youtube · AI Moral Status · 2025-08-11T18:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz9W1si99NSDz7NeY14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwGgz6arQphYgBE-oZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyM_tvjWlym1EWTVbB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzcceDkgaxG3tF3H7p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBX0q1Dh7hf77ocXB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxaBzZuPI7R6Ow6tQZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0jZoq95j70F6SgJh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgweLZeK9NQQ5gscD6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyIlckdZpkIxfrPKpF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz_okRhcAbd4ZV5vh54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
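A raw batch like the one above is only useful if every record carries a valid label on each of the four dimensions. The sketch below shows one way to sanity-check a response before accepting it. The label sets in `ALLOWED` are an assumption inferred from the values visible in this sample; the real coding scheme may permit other values.

```python
import json

# Hypothetical label sets, inferred only from the sample response above.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag records with missing or unknown labels."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"}]'
print(validate_batch(raw))  # → [] (all labels valid)
```

Records flagged here (unknown label, missing dimension) can then be re-queued for recoding rather than silently stored with bad values.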