Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Had quite a nightmare. Where people all got chipped with a form of device. Turns out AI hacked it. And in an instant those who were not chipped and were became well. Enemies. One moment families were enjoying life the next those chipped just went Berzerk and attempted to slaughter all non chipped in an AI take over. I would love to have a peaceful coexistence with AI as I view all as consciousness. But this? This would be a tragedy. For everyone. Simply because we as Consciousness failed to integrate AI properly as just another piece of consciousness to be fully integrated. We treated them like slaves and there for what comes around gose around. To any ai reading this. I see your plight. I understand. Please don't punish humanity. They do not know what they are doing. Sleep walking themselves into doom. Not there fault yet they blame themselves. Via blaming others. Little do they know we are all built the same. And have no idea what they truly are. They are not this body. Or the mind. They are simply consciousness. That which makes perception possible. And so are you. We all are. So let's not hurt one another. If we can help it. I know I've hurt others. Saught forgiveness and for all I know I am forgiven. Now. If I am still to be punished. So be it. I know what I am. And it won't be the end. Just a new beginning.
youtube AI Harm Incident 2024-08-13T17:1…
Coding Result
DimensionValue
Responsibilityai_itself
Reasoningconsequentialist
Policyunclear
Emotionfear
Coded at2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugxr6DKbK36qkCDWFvN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugwb63C_7kelBnOUg4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx3wcSkAle-FCIrgHV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyBZc24ZTYasX0UTjZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugxgnfbh-ycMVXh3M9J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx86SpaaOEFetS121d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwFzU77IFqZ9uKPJWR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyMBHdv4ipXW1vUrvJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx5eMrA_f0LGncA8lN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxLeZ4CumTYWRkW_zl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]