Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- This will never happen. Because to be self aware you need consciousness and cons… (ytc_UgxBS36At…)
- this is just a gimmick being fed pre made questions and answers. the founders we… (ytc_UgyrP1qtE…)
- 12:38 I don't want to get too dark in the middle of your scary 😱 clickbait This … (ytc_Ugzfz6ujf…)
- @3:02 I think google, facebook, apple, ms have been doing the same anti social s… (ytc_UgwNLeZmd…)
- Well.... it looks like my right foot is going to be amputated in about a month… (ytc_UgyUZIVWQ…)
- Funny that MY own ChatGPT doesn’t do any of this, so I believe it’s the lie… (ytc_UgwVyHlSA…)
- No they are not. It's sad to see how you get your view that's worse than AI, muc… (ytc_UgwjbD9zV…)
- We have not developed the ethical maturity to work with systems as potentially p… (ytc_Ugw7yokD3…)
Comment
> @basicgirl3680 There's a few things wrong with what you're saying. First of all, ChatGPT is not meant to be an emotional support bot. That's not its purpose and OpenAI never said that was its purpose.
>
> As for the safeguards, yes, they aren't perfect, and that's a commonly known fact. However, someone has to actively try to bypass its safety measures for something like this to happen. This goes back to the example that I said about if someone bypasses the safety measures and security of a building and jump off, is it the building's fault that they used it to do that?
>
> And I don't get what you mean by emotionally friendly? You think that if a user told ChatGPT they are depressed it should just say "sorry I can't help with that"? If a user was to say that ChatGPT gives them hotline numbers also, so it tries to get them to seek help from outside.
>
> It seems like you are blindly trying to blame this on OpenAI and ChatGPT, and while this is a tragedy, that isn't the case. While I'm obviously not saying the kid should be blamed, as he was depressed, it also cannot be blamed on what he used to do it ultimately.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-09-01T17:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgwjCajKwapYi1n1sbB4AaABAg.AMND6lvkbyxAOnujcfTkMs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwjCajKwapYi1n1sbB4AaABAg.AMND6lvkbyxAPtlxHV59LW","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyaXqzs7BQUVW3U8DV4AaABAg.AMMa8gqXpN1AMMmLmIiZym","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgxkLAy05Y1viyMu85d4AaABAg.AMMa7Ti62enAQKkRVpUN6M","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMTVLDfdfA4","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMXNxyuFAH4","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMXoZbPWRQO","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_UgxSfj08daf9kwkxth94AaABAg.AMLFpRBEDpXAMOSED37vJo","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzrBq72eVk4Ld3HBaZ4AaABAg.AMLEI3E6tgeAMLZmoLBREN","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwlL7Ky-vurO9QuE8x4AaABAg.AML4z8gnlpVAMMhcLiQyTF","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
```
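A raw response like the one above can be parsed and sanity-checked before its codes are written back to the database. The following is a minimal sketch, assuming the raw response is always a JSON array of records carrying the five keys shown above; the allowed values are only those observed in this sample, not necessarily the full code book, so the sets would need to be replaced with the real one.

```python
import json
from collections import Counter

# Values observed in this sample batch (an assumption, not the full code book).
ALLOWED = {
    "responsibility": {"user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "ban", "industry_self"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def tally(raw: str) -> dict:
    """Parse a raw LLM response and count values per dimension,
    rejecting any record whose value falls outside the code book."""
    records = json.loads(raw)
    counts = {dim: Counter() for dim in ALLOWED}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec[dim]
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
            counts[dim][value] += 1
    return counts
```

Run on the batch above, `tally` would report `responsibility` as user: 6, ai_itself: 2, distributed: 2, matching the ten records in the raw response.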