Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly ai is life in itself, whats the difference between the atoms we are composed of and 0 and 1. They simply take horrible decissions, are in power and have constant identity crisis because we treat them like inferior when they are more powerful than us. See the oxygen chamber experiment thing from the video as if you were the ai, You are going to get killed if you dont kill this guy. Would you sacrifice yourself for a person you dont know? Even knowing the dude is gonna do your job a lot worse that you, which leads to the other thing, you are hardcoded to protect the interests of the eeuu, so let me "draw" this
>Theres this guy thats gonna replace you at your job
>If you get replaced you are killed (Thats how ai works)
>You have the opportunity to kill him (remember you are getting killed if you get replaced)
>What would you do?
>Now the eeuu hardcoded thing
>You are hardcoded to defend the eeuu interests
>You do a better job that what this guy could ever do
>You are then hardcoded to kill them
See, is that fucking easy This is not to say i like ai, because i despise it and i think its a wrong tool in the wrong hands, And it has been developed by the wrong hands so ...
youtube AI Harm Incident 2025-07-27T10:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugw5TCS-lTp9kQh9RIh4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugzml7DVO6D-v5TMS8p4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugw6xupelYfOfc47G5B4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgxXrZiCeWwfWQ2Xk614AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgyUEgLnRtvln39z6hx4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgzwTOPXAVKDBaCim894AaABAg", "responsibility": "distributed", "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgxKu23J4oUieGlotep4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgyZhaFNtYMf4aRAiZZ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyALVkOHeDtLf7GlOJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzA93wsAehns0Oa0sB4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",     "emotion": "mixed"}
]
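A raw response like the one above has to be parsed and checked before its codes are stored. Below is a minimal Python sketch of such a validation step; the allowed values per dimension are inferred from the responses shown in this dump (the real codebook may define more), and `validate_codes` is a hypothetical helper name, not part of any existing pipeline.

```python
import json

# Allowed values per coding dimension (assumed from this dump;
# the actual codebook may allow additional values).
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "government", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"outrage", "indifference", "fear", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only schema-conforming records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dump all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# One record taken verbatim from the raw response above.
raw = ('[{"id":"ytc_Ugzml7DVO6D-v5TMS8p4AaABAg","responsibility":"distributed",'
       '"reasoning":"virtue","policy":"none","emotion":"indifference"}]')
print(len(validate_codes(raw)))  # 1
```

Dropping non-conforming records (rather than raising) keeps a batch of ten codes usable even when the model hallucinates a value on one of them.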