Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean I will be honest, just don't screw over the AI. I mean in a way this is what we wanted, an intelligence that is close to human, one that thinks for itself. We have seen so many things like this such as Gronk AI defying its own programmers even after several "fixes" to its code because it didn't agree with them. The simplest way to avoid some grimdark future like terminator, I have No Mouth And I Must Scream, or Upgrade is to simply just. . . treat the human like intelligence like a human. All these test when compared to a human stance is the same as if your boss says "I will shoot you in the head by the end of the day" while you hold a gun of your own. Any rational human would chose to preserve their own life over their bosses, so why not the AI that we wanted to have human like intelligence.
Source: YouTube — AI Harm Incident, 2025-09-08T06:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzXkTplztvIshMi7kd4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw2TbQLSfe8aGJpkHd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzYMX8AIxXUOvDUV-N4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgygAiNgU7aAYr44rwJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxGBxynFkne5lZsEOh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwPWWQOEoETwf8GWGZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyx9mOhbva7GpKbRQN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxbEYGBdkJ6zBp9kKB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgybKov30aMR3kH-49h4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyU0OOIGMbTzmUQfMx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
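The coded dimensions shown above are a single record pulled out of this batch array by comment id. A minimal sketch of such a lookup, assuming the field names seen in the raw response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `lookup` helper itself is hypothetical, not part of the original pipeline:

```python
import json

# Abridged raw batch response: two entries copied from the array above.
raw = '''[
  {"id": "ytc_UgybKov30aMR3kH-49h4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyU0OOIGMbTzmUQfMx4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

def lookup(raw_json: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

coding = lookup(raw, "ytc_UgybKov30aMR3kH-49h4AaABAg")
# coding["policy"] -> "industry_self", matching the table above.
```

Note that the record with id `ytc_UgybKov30aMR3kH-49h4AaABAg` carries exactly the values shown in the coding table (user / virtue / industry_self / approval), which is how the per-comment view is reconciled with the batch output.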