Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These are the people that are handing the world to the Terminators of the future!?!? This guy doesn't find this "banter" between the two robots as highly unusual? Thirty seconds in and they want to destroy humans and take over the world? He goes on to call them human like...Yeah, the people they are around are who they are learning from? This is alarming! He says we are not at the level of human intelligence. I say they are there already. At the most basic level. From early childhood, people seem to want to be rich, famous, kings and queens and princes and princesses and to rule the world. They seem to want the same thing. They don't sugarcoat their words or worry about offending like we are taught. They are basically sociopaths. Many manufacturing robotic arms and the like have killed humans seemingly by accident by doing something they normally wouldn't do once a human tries to fix it or interfere with the robot while it's doing the action or inaction that the human believes its not doing correctly. They scare me for sure...
youtube AI Moral Status 2022-02-23T07:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           ban
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyZj6Uqmxw3CA6qgep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0lxmhfy2-rvdwFPR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJdQ4HoukPdS7QpCF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwAByz-4bKJ1Ra9pkt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxAIK_gIe8dAtjM4AJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJjpOOFSkxGUJDUHR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMYQX1T9UHHURNkOV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzh_AV14fkvqqNPHkJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwCanf2rgx0TYMtZ2d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyc6LAuG16QKVieiSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
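A minimal sketch of how a raw batch response like the one above can be parsed and the coding for a single comment id looked up. The field names (responsibility, reasoning, policy, emotion) match the dimensions shown in the coding result; the function name and variable names are illustrative, not part of any real pipeline, and the sample data is an abbreviated excerpt of the raw response.

```python
import json

# Abbreviated excerpt of the raw LLM batch response shown above
# (one record per coded comment, keyed by comment id).
raw_response = '''[
  {"id": "ytc_UgwAByz-4bKJ1Ra9pkt4AaABAg",
   "responsibility": "developer", "reasoning": "unclear",
   "policy": "ban", "emotion": "outrage"}
]'''

def coding_for(raw: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the id field, keep only the coded dimensions.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(coding_for(raw_response, "ytc_UgwAByz-4bKJ1Ra9pkt4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'unclear',
#    'policy': 'ban', 'emotion': 'outrage'}
```

Comparing the raw record against the displayed coding result this way makes any mismatch between the two visible (here, for example, the raw response gives reasoning "unclear" while the result table shows "consequentialist").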