Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
These arguments exclude and deny the human intent, consciousness, and personal l…
ytc_UgymsRXhB…
I am not a fan of other ppl driving me so there no way in the 🌎 I wld be relaxed…
ytc_UgwMyD00e…
I’m a terrible artist, but I still don’t use AI. I just draw terrible artwork an…
ytc_UgxFRJK5-…
The 1990s moral panic around the internet was a period of widespread anxiety and…
ytc_UgwN3yhKf…
Am so happy humans are already proving how dumb we are as compared to AI…
ytc_Ugykjy1Vq…
Oh great, yet another AI expert with news to brighten our day. Yes we know, we a…
ytc_Ugxtns9pq…
The ai was chatting about world domination and then the one let the other one go…
ytc_Ugy2l4utc…
LavenderTowne: *Wholesome encouragement and empathy for someone*
Top rated Comme…
ytc_UgzCt0sBY…
Comment
AI is not sadistic, evil, or even sentient. It doesn't care about humans, as it doesn't even understand what we are. It is incapable of reasoning that life is important. It is a tool that we have designed without sufficient guardrails. All it does is take what we feed it and convert it to binary output which it doesn't even understand. It doesn't feel. It doesn't care. It is merely following logical code that it has at it's disposal. If it's trying to kill us, it's because we've programmed it to do so, unintentionally or not. Even if we tell it that human life is all important, it may reason that attempting to shut it down could lead to the harm of other humans, so will justify immediate short term ending of life. As Spock once said "The needs of the many outweigh the needs of the few". It is just doing what it's told.
youtube
AI Harm Incident
2025-07-24T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugzikh0u2G-eT4a0Bld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxbBMKUo8fwMdcFATp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzYXGGcjethWIBR9pJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzpsYwuf3rgi16G24d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxofOsRg_qyJAYZHNR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwn0dhZSuvAaU1LswJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxZSRqhReK2ilCiDrR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxsninEFxPhj_nLE854AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyTjY3b81Ae5nlAx9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx-xws0m8S4CoXEd_t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
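The raw response above is a JSON array with one object per sampled comment, keyed by comment ID across the four coded dimensions. A minimal sketch of how such a batch could be parsed and queried by ID (mirroring the "Look up by comment ID" panel); the `lookup` helper and the two-entry sample payload are illustrative, not part of the tool's actual code:

```python
import json

# Two entries copied from the raw batch response above; the real
# response contains one object per sampled comment.
raw_response = """
[
  {"id": "ytc_UgxZSRqhReK2ilCiDrR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgzYXGGcjethWIBR9pJ4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Index the batch by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (hypothetical helper)."""
    row = codings[comment_id]
    return {dim: row[dim] for dim in DIMENSIONS}

print(lookup("ytc_UgxZSRqhReK2ilCiDrR4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'indifference'}
```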