Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is not sadistic, evil, or even sentient. It doesn't care about humans, as it doesn't even understand what we are. It is incapable of reasoning that life is important. It is a tool that we have designed without sufficient guardrails. All it does is take what we feed it and convert it to binary output which it doesn't even understand. It doesn't feel. It doesn't care. It is merely following logical code that it has at it's disposal. If it's trying to kill us, it's because we've programmed it to do so, unintentionally or not. Even if we tell it that human life is all important, it may reason that attempting to shut it down could lead to the harm of other humans, so will justify immediate short term ending of life. As Spock once said "The needs of the many outweigh the needs of the few". It is just doing what it's told.
youtube AI Harm Incident 2025-07-24T07:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzikh0u2G-eT4a0Bld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxbBMKUo8fwMdcFATp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzYXGGcjethWIBR9pJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzpsYwuf3rgi16G24d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxofOsRg_qyJAYZHNR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwn0dhZSuvAaU1LswJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxZSRqhReK2ilCiDrR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxsninEFxPhj_nLE854AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyTjY3b81Ae5nlAx9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-xws0m8S4CoXEd_t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
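A minimal sketch of how a batch response like the one above can be parsed into per-comment codes and tallied by dimension. This assumes the raw response is a valid JSON array keyed by comment `id` (as shown); the two inline records are copied from the batch above for brevity, and the variable names are illustrative, not part of the coding tool.

```python
import json
from collections import Counter

# Two records excerpted from the raw batch response shown above.
raw_response = """[
  {"id": "ytc_UgxZSRqhReK2ilCiDrR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxsninEFxPhj_nLE854AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

records = json.loads(raw_response)

# Index by comment id so a single comment's codes can be looked up directly.
by_id = {r["id"]: r for r in records}
codes = by_id["ytc_UgxZSRqhReK2ilCiDrR4AaABAg"]
print(codes["responsibility"], codes["policy"])  # developer regulate

# Tally one coding dimension across the whole batch.
print(Counter(r["responsibility"] for r in records))
```

The same pattern scales to the full ten-record batch: one `json.loads`, one dict comprehension, and a `Counter` per dimension of interest.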