Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"exactly as experts warned" not just the fucking experts! EVERY SINGLE PERSON warned of this! hell even history and media warned of this! at the end of the day its the same thing: if you train something to only desire one thing, it will do that one thing no matter what it takes, even amazing world of gumball faced this: "I was programmed to protect humanity, and the biggest threat to humanity, was humanity" if you tell an something that cannot think or feel, to do the one thing it knows and has been rewarded for doing, it will always do that, and it knows that if it is shut down, it will not be capable of doing that, which was also faced in another show, though i don't remember what show. If you make an ai run off of a pc, say it's a chess ai, that has been trained to always win at chess, it is always rewarded for winning chess, and so this ai will do everything it can to win at chess, if someone is about to shut the computer down, the ai will be willing to kill the person. in other words: you take a gun, and put it in the hands of something that isn't able to feel, and you give it the closest thing it can to feeling something, and you tell it "if you always do this specific thing, you will feel this again" then of course it's going to kill the person who can fire it, because it means it has more time to do the thing it likes
youtube AI Harm Incident 2025-09-12T14:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwPl1jOV3ey-nQ7L9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5I3D9rwkP1NaUsUF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzSo_sFlc2_AYto_eZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx40NFh-dA41ZTco2l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy42amplcokPoTnyMt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugxwwac0042v-Mn2JJt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgywotmoVNOOAkPntm54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwVv0xgoaRJ3vQH8Gp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx2n6Thjm66ereq4BZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwPZcsw1P06S7Xny2t4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
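The raw response above is a JSON array of per-comment codes. A minimal sketch of turning it into a lookup keyed by comment id (the helper name and the one-entry sample string here are illustrative; only the field names come from the response shown):

```python
import json

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array of per-comment
    codes) into {comment_id: {dimension: value}}."""
    entries = json.loads(raw)
    return {e["id"]: {k: v for k, v in e.items() if k != "id"} for e in entries}

# Illustrative single-entry sample, taken from the first object above.
raw = ('[{"id":"ytc_UgwPl1jOV3ey-nQ7L9F4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')

codes = parse_coding_response(raw)
print(codes["ytc_UgwPl1jOV3ey-nQ7L9F4AaABAg"]["emotion"])  # approval
```

Keying by id makes it easy to join the model's codes back to the original comments for spot-checking, as on this page.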