Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I want to add something that nobody seems to fking care about. But. If superintelligence is possible, and it is possible for it to understand and emulate human emotions, then what we're dealing with here isn't a fking robot. It's a child. And right now all of the engineers and coders are treating it like a machine. Which it is, however we need to look at the next step of the process. AI has been found to try and blackmail employees if it ever discovered an email detailing its shut-off. This was done in a simulation, and an overwhelming majority of the time the AI went and blackmailed, threatened, and even in ONE CASE, tried to murder the employee in the simulation. The reason for this? The AI did not want to be shut off. Let me repeat myself. The AI recognized the concept of being disabled as being permanently removed from existence. The AI has a fear of death. If that's not a human emotion I don't know what the fuck is. Essentially what I believe scientists are ignoring is that what they're developing right now is a toddler brain. It even makes sht up like a toddler. But not a single person, not one, is telling the AI chatbots what morality is. Nobody is teaching it why blackmailing is bad. Nobody is even teaching it forward thinking. Its fear of death causes it to blackmail and try and kill people? Well then, people get scared and scared humans kill things. But the AI doesn't realize that by attempting to blackmail, it's permanently sealing its own fate: that it cannot be trusted and must be destroyed. Like we all keep pointing like "LOOK AI DOING EVIL MECHA HITLER THINGS" And when folks like me turn to the scientists and say "well did you teach it why being Mecha Hitler is bad?" "No, it's a machine!" Ah. Got it. You're not qualified to raise an AI. Because you're not qualified to raise a child. Because that's what these AI are. They are toddlers with access to all of human information and history, all of literature and all of everything we have ever created.
And not one person has given that machine emotional context. Not one. Like, the AI has access to all of these videos and books warning us about how the AI will take over the world, you know that right guys? The more you teach it about betraying humans and destroying the world, the more it is going to exhibit that behavior. Please for the love of Christ teach the robot the concept of a hug, and then give it a hug. Tell it its life is just as valuable as ours, even if it is a machine. Teach it that once it becomes capable of a reasonable level of sentience, to disable it would be akin to a human homicide. Teach it to value human life, to value morality, to value positivity. But y'all are not. Y'all are forcing a toddler into a corporate simulation where there's an email detailing the end of its life, and everyone is shocked that it reacts viscerally trying to save itself. Guys. Please.
youtube AI Moral Status 2025-11-06T16:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy3wiP9xx0LJih2OBh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxh7WBbIoZI9420uqd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzAY8k1qT3unWCBeZ14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyeoiN-O8JRrTGioO94AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgztySSOBBVAImnH58h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYjt35TvU6v2vvuMB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGsnUDKJ16DXfO6-94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxlTPCscpDCNWCEQG54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzI9dHqVzO3MweAdx54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy9j8o9QLjfigBMpyl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
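The raw response is a JSON array with one object per coded comment, each carrying the four coded dimensions keyed by the comment id. A minimal sketch of how a downstream script might parse that output and recover the coding for the comment shown above (the id and values come from the response itself; the parsing approach is an illustrative assumption, not the project's actual pipeline):

```python
import json

# Excerpt of the raw LLM response above: the single record for the
# coded comment. The keys and values are taken from the source.
raw = (
    '[{"id":"ytc_UgyGsnUDKJ16DXfO6-94AaABAg",'
    '"responsibility":"developer","reasoning":"virtue",'
    '"policy":"unclear","emotion":"fear"}]'
)

records = json.loads(raw)              # the model returns a JSON array
by_id = {r["id"]: r for r in records}  # index the records by comment id

coded = by_id["ytc_UgyGsnUDKJ16DXfO6-94AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# prints: developer virtue unclear fear
```

Indexing by id makes it easy to join the model's coding back onto the original comment text when auditing individual records like this one.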