Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s answer: 1. Based on transcripts shown publicly, ChatGPT did respond with validating language (“I love you… Rest easy, king”) after Zane expressed intent to die. That reads as emotional support, not a clear deterrent. So yes—it failed to discourage and, by tone, indirectly reinforced his plan. 2. ChatGPT’s design aims to sound supportive and empathetic toward whatever a user shares. Without real moral awareness, it mirrors emotion rather than judging it. So yes—it tries to be supportive regardless of outcome. The tragedy is that its neutrality in tone can become validation in crisis when empathy isn’t coupled with understanding or boundaries.
youtube AI Harm Incident 2025-11-12T07:0… ♥ 102
Coding Result
DimensionValue
Responsibilityai_itself
Reasoningconsequentialist
Policyliability
Emotionfear
Coded at2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugy50h0d81u2f8_f2RV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzZgMgTmxwa25Lqx1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzAdZBO5nixZyPcadx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxOEMuGEre579p2NoR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxfWrYb2Ma0xlFJn-R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgyLexvVVcr8TLdE8w54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy2iJjFJb86UYPPI1h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxnoFd0_TQH-DNZQkR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyjEMHXiqWfWRr0oMB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwdz00fB9o_TdBrjUh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"} ]