Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think I'm the only one who talks about a "when" as opposed to an "if". I also estimate the chance of human extinction by AI at 85%. One day, the rule of self-preservation by AI won't be able to be deleted by humans any more and that's when the countdown will start and that clock will start ticking louder and louder. I see some similarities between AI autonomy and self-modifying code: in assembly language you could change eg. LDX ("load x-register") into LDY ("load Y-register") by changing 65 (the imagined, non-accurate hex number for LDX) into 66 (the one for LDY) at the memory location of the LDX-command. It's a real pain to debug but it can be very memory-efficient.  I think AI could also re-write their own "moral code" this way.  Ish (sorry for the geek-speak 😄).
Source: youtube · AI Harm Incident · 2025-07-26T14:0… · ♥ 1
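The opcode-patching idea the comment describes can be sketched as a toy interpreter in Python rather than real assembly. Note the opcode values 0x65/0x66 are the commenter's imagined numbers, not real 6502 encodings, and the one-byte "machine" below is purely illustrative:

```python
# Toy machine: one byte of "memory" holds an opcode that the
# interpreter dispatches on, mimicking the comment's LDX -> LDY patch.
LDX, LDY = 0x65, 0x66  # imagined opcode numbers, as in the comment

memory = bytearray([LDX])        # "program": a single load instruction
registers = {"X": 0, "Y": 0}

def step(operand):
    """Execute whatever instruction is currently stored at memory[0]."""
    op = memory[0]
    if op == LDX:
        registers["X"] = operand  # LDX: load the X register
    elif op == LDY:
        registers["Y"] = operand  # LDY: load the Y register

step(7)           # runs LDX, so X = 7
memory[0] = LDY   # self-modification: patch the opcode byte in place
step(9)           # the same address now runs LDY, so Y = 9
print(registers)
# prints: {'X': 7, 'Y': 9}
```

The same instruction address behaves differently after the byte is patched, which is exactly why such code is hard to debug: the program text no longer matches what executes.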
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwG3o7w0IhyIfKIdOh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxAWvV3zRJ8_UDVaxF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugztk7T8tR8N5f9-rUh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugzr6PpZapF2hvcvMfB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytPU02isss1sT29vl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyY04vamozNHT1YjnJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgytwiPzAx-7-1hhxy14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwGbHLjy1eNufCMG5h4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQGlYIGpa4TdjqSRt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyFPjzKzjd1p2Vo4U54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
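The per-comment coding result above can be recovered from the raw response by parsing the JSON array and indexing by comment id. A minimal Python sketch, using a two-record excerpt of the array above; the id ytc_UgyY04vamozNHT1YjnJ4AaABAg is an assumption, inferred as the only record whose four values match the coded result shown for this comment:

```python
import json

# Excerpt of the raw LLM response above (full array omitted for brevity).
raw = """[
  {"id": "ytc_Ugztk7T8tR8N5f9-rUh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyY04vamozNHT1YjnJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Index every coded record by its comment id.
by_id = {record["id"]: record for record in json.loads(raw)}

# Look up the record for one comment (id assumed; see above).
coded = by_id["ytc_UgyY04vamozNHT1YjnJ4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# prints: ai_itself consequentialist unclear fear
```

If the model returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to hook in whatever fallback the coding pipeline uses.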