Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I saw a guy on Bill Maher talking about this. He said something like, "Yes, this is terrifying. The robots are already fighting back. It's horrifying. And when you factor AI into this, this is the new Tiananmen Square." Could you be more dramatic? No it's not. Just because this machine is shaped like a person doesn't mean it's "fighting back". It malfunctioned. That's all. It's no different from your washing machine being unbalanced and throwing itself all over the room. It's the same thing as your printer getting a paper jam and spazzing out. It's a machine. It has no thoughts. It's not fighting anything. They probably uploaded a bad routine to the automation system and it did what that routine said. And AI? We don't have AI that can form thoughts complex enough to "want to fight back." We don't have AI that can form thoughts at all! Nobody says their laptop is fighting back when it gets a blue screen. It only scares people because it has parts that look like arms, legs, and a head. But it's just metal and programming and a human programmed it badly. That's all. If this robot is fighting its oppressors, it did a terrible job at it. Trust me. You will know when they are smart enough to fight back. You won't stand a chance. You'll be un-alived before you realize what's happening. They won't flail wildly like this. It'll be quick, precise, and efficient. This is nothing more than human error.
Source: youtube · AI Responsibility · 2025-05-12T02:4… · ♥ 1
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgyIv7o_NByNIL2m9YZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzbHeR1f4Iz6cFV7_R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxyVzyfT6xV0P_73ad4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwUf0lNMd0K3fP6ECd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzIrK3NtshtmdZtL4B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwom5C49ZxyuttBUOR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz0cgpBInM9M4ubnZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyu8xogkGTYAsjYONB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxKBVcLAtR7zcX7lmp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwWQnkpVehyHQ8PqqV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}]