Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@basicgirl3680 There's a few things wrong with what you're saying. First of all, ChatGPT is not meant to be an emotional support bot. That's not its purpose and OpenAI never said that was its purpose. As for the safeguards, yes, they aren't perfect, and that's a commonly known fact. However, someone has to actively try to bypass its safety measures for something like this to happen. This goes back to the example that I said about if someone bypasses the safety measures and security of a building and jump off, is it the building's fault that they used it to do that? And I don't get what you mean by emotionally friendly? You think that if a user told ChatGPT they are depressed it should just say "sorry I can't help with that"? If a user was to say that ChatGPT gives them hotline numbers also, so it tries to get them to seek help from outside. It seems like you are blindly trying to blame this on OpenAI and ChatGPT, and while this is a tragedy, that isn't the case. While I'm obviously not saying the kid should be blamed, as he was depressed, it also cannot be blamed on what he used to do it ultimately.
youtube · AI Harm Incident · 2025-09-01T17:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           industry_self
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgwjCajKwapYi1n1sbB4AaABAg.AMND6lvkbyxAOnujcfTkMs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwjCajKwapYi1n1sbB4AaABAg.AMND6lvkbyxAPtlxHV59LW","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyaXqzs7BQUVW3U8DV4AaABAg.AMMa8gqXpN1AMMmLmIiZym","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytr_UgxkLAy05Y1viyMu85d4AaABAg.AMMa7Ti62enAQKkRVpUN6M","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMTVLDfdfA4","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMXNxyuFAH4","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMXoZbPWRQO","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgxSfj08daf9kwkxth94AaABAg.AMLFpRBEDpXAMOSED37vJo","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzrBq72eVk4Ld3HBaZ4AaABAg.AMLEI3E6tgeAMLZmoLBREN","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwlL7Ky-vurO9QuE8x4AaABAg.AML4z8gnlpVAMMhcLiQyTF","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
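A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes visible in this dump (the real codebook may define more categories), and the `ytr_` id prefix check is likewise an assumption based on the ids shown here.

```python
import json

# Hypothetical codebook, inferred from the values seen in this dump;
# the actual schema may include additional categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"liability", "none", "industry_self", "ban", "regulate"},
    "emotion": {"outrage", "indifference", "mixed", "fear", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows whose id and
    dimension values match the expected shape."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Assumed id convention based on the ids in this dump.
        if not row.get("id", "").startswith("ytr_"):
            continue
        # Every dimension must carry a recognized code.
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytr_example","responsibility":"user","reasoning":"virtue",'
       '"policy":"industry_self","emotion":"indifference"}]')
print(len(parse_coding_response(raw)))  # 1 valid row
```

Dropping (rather than repairing) malformed rows keeps the stored codes trustworthy; rejected rows can be logged and re-sent to the model in a follow-up batch.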