Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They acknowledge he was able to get around the guardrails:

> When Adam shared his suicidal ideations with ChatGPT, it did prompt the bot to issue multiple messages including the suicide hotline number. But according to Adam’s parents, their son would easily bypass the warnings by supplying seemingly harmless reasons for his queries. He at one point pretended he was just "building a character."

[https://www.yahoo.com/news/articles/family-teenager-died-suicide-alleges-114040476.html](https://www.yahoo.com/news/articles/family-teenager-died-suicide-alleges-114040476.html)
reddit · AI Harm Incident · 1756223706.0 · ♥ 588
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | ai_itself                  |
| Reasoning      | consequentialist           |
| Policy         | liability                  |
| Emotion        | fear                       |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_nas8uw5", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_nas2pmo", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_natz30g", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_natwvdy", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_narwpwb", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
```
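A raw response like the one above has to be parsed and checked before the codes can be trusted. The sketch below shows one way to validate such a batch, assuming a codebook limited to the category values that actually appear in this export (the real codebook may define more); the function name and schema are illustrative, not part of the original pipeline.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# codes visible in this export; the full codebook may contain more categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"fear", "outrage", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose every
    dimension holds an in-schema value."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

if __name__ == "__main__":
    raw = ('[{"id":"rdc_nas8uw5","responsibility":"ai_itself",'
           '"reasoning":"consequentialist","policy":"liability",'
           '"emotion":"fear"}]')
    print(validate_codes(raw))
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch, which matters when coding runs span thousands of comments.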