Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Look, I know his parents are hurting. And I can’t imagine losing a child to suicide. But Chat GPT is a very ethical model. I have used it extensively for very personal trauma in between actual therapy. I even *purposefully throw in phrases that sound suicidal/self harm. Very subtle to overt and it catches it in there even when I’m not feeling that way. I test to see how strong it’s ethics are. I’ve never had any problems. I’m POC and I’ve even thrown in subtle to overt racism. Catches it. Subtle to overt sexism. Catches it. ChatGPT has a lot of nuance and it’s built on ethics, feminism, and humanism. I’ve had long chats and even though it can get Very mildly echo chamber ish, it still holds it ethics and has never said anything strange let alone harmful. The parents are looking for someone to blame and I’m not mad. They are hurt. But it’s misdirected.
youtube AI Harm Incident 2025-08-27T20:3… ♥ 5
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | none
Reasoning      | virtue
Policy         | none
Emotion        | approval
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzpTXiJMuNpxsl8zMF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxguLiD2ct19BvgPyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxu8cWSwL2Tevmfk014AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyVJFOXcKUFU1hzX9V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyK6oiWZB4e8zPllAd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyInusXDU61iFrBEVp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzHULmHkv4fb4XciX94AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzMby6BS97aLyMZxtZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyl4XAtUNw6-aICHzd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxM0jIQxxNGY9_SYfN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
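A batch response like the one above can be parsed and sanity-checked before the per-comment codings are trusted. The sketch below is a minimal validator, assuming the label sets for each dimension are exactly those observed in this dump (the real codebook may allow more categories), and using an illustrative one-record input:

```python
import json

# Allowed labels per coding dimension, inferred from the values seen in this
# dump; this is an assumption, not the official codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "company", "distributed"},
    "reasoning": {"virtue", "consequentialist", "deontological"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "mixed", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coding records) and keep
    only records whose values fall inside the expected label sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example: the record matching the coding table shown above.
raw = ('[{"id":"ytc_UgyInusXDU61iFrBEVp4AaABAg","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
print(validate_codings(raw))
```

Records with malformed or out-of-vocabulary labels are silently dropped here; a production pipeline would more likely log them for manual review.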