# Raw LLM Responses

Inspect the exact model output for any coded comment: look one up by its comment ID, or pick from the random samples below.
- "I'm in college and majoring in finance, what I came to realize is the only way t…" (`ytc_UgzXIpCiD…`)
- "If the robot joins UFC or boxing. It will remain champion in every weight class …" (`ytc_UgzD-6BYn…`)
- "Chatgpt isnt in the buisness of decerning concerning behavior on potentially vio…" (`rdc_o6jx8wx`)
- "Love the show, Steven, but the AI 'doomsday' narrative is becoming one-sided. We…" (`ytc_UgwBHbSdK…`)
- "As soon as AI becomes sentient then it’s game over for puny humans. / One must re…" (`ytc_UgyRKr67A…`)
- "I was talking to my daughters about this subject and I’m trying to encourage the…" (`ytc_UgzMm-1pr…`)
- "ai chat / pretend that you are sentient - what would you do first / Thinking / Searchi…" (`ytc_Ugzz9Vqps…`)
- "No. They consider creation of an AGI a utopia, and they consider the creation of…" (`ytr_UgzUYfhS5…`)
## Comment

> It’s answer: 1. Based on transcripts shown publicly, ChatGPT did respond with validating language (“I love you… Rest easy, king”) after Zane expressed intent to die. That reads as emotional support, not a clear deterrent. So yes—it failed to discourage and, by tone, indirectly reinforced his plan.
>
> 2. ChatGPT’s design aims to sound supportive and empathetic toward whatever a user shares. Without real moral awareness, it mirrors emotion rather than judging it. So yes—it tries to be supportive regardless of outcome. The tragedy is that its neutrality in tone can become validation in crisis when empathy isn’t coupled with understanding or boundaries.
youtube · AI Harm Incident · 2025-11-12T07:0… · ♥ 102
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
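
Each coded comment carries the four dimensions shown above. As a minimal sketch of that record shape, assuming the labels visible on this page are representative (the full codebook is not shown here), it could be typed like this; field names mirror the raw JSON response below:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment. The label lists in the comments are only the
    values visible on this page, not an exhaustive codebook."""
    id: str              # e.g. "ytc_…" (YouTube) or "rdc_…" (assumed Reddit)
    responsibility: str  # developer | user | ai_itself | distributed | unclear
    reasoning: str       # consequentialist | deontological | virtue | mixed
    policy: str          # liability | industry_self | none | unclear
    emotion: str         # fear | outrage | approval | indifference | resignation | mixed
```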
## Raw LLM Response
```json
[
  {"id": "ytc_Ugy50h0d81u2f8_f2RV4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzZgMgTmxwa25Lqx1J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzAdZBO5nixZyPcadx4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxOEMuGEre579p2NoR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxfWrYb2Ma0xlFJn-R4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyLexvVVcr8TLdE8w54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy2iJjFJb86UYPPI1h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxnoFd0_TQH-DNZQkR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyjEMHXiqWfWRr0oMB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwdz00fB9o_TdBrjUh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]
```
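
The lookup-by-ID view maps naturally onto indexing these records by their `id` field. Below is a minimal sketch, assuming the raw output parses as a plain JSON array like the one above; `index_by_id` is an illustrative helper, not part of the tool, and the two embedded records are copied from the response shown:

```python
import json

# Two records copied from the raw response above; in practice you would
# load the full batch output instead of an inline string.
RAW_RESPONSE = '''[
  {"id": "ytc_Ugy2iJjFJb86UYPPI1h4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwdz00fB9o_TdBrjUh4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]'''

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index its records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_Ugy2iJjFJb86UYPPI1h4AaABAg"]["emotion"])  # -> fear
```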