Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This boy's parents are grieving - naturally, they are looking for someone to blame in their pain. OpenAI is a big target. Please don't be so cruel in your responses.
Should the parents have been more involved in their son's life? Yes - but that doesn't guarantee that the child won't still hide their suicidal ideation from their parents. The question this story poses is ultimately about whether OpenAI has a moral responsibility to ensure their product is ethically designed.
I think it makes sense that ChatGPT should have guardrails which prevent discussions of methods one can use to commit suicide, especially if triggering those guardrails also comes with supportive messaging encouraging the user to seek medical attention. However, Chat is easily corralled into giving up info on sensitive topics if you insist you're just HYPOTHETICALLY wondering how you HYPOTHETICALLY might HYPOTHETICALLY find a way to end your life.
This is a very sad story, but I don't know if the parents will find the closure they seek by suing OpenAI.
Source: reddit
Topic: AI Harm Incident
Posted: 2025-08-26 (Unix timestamp 1756217251)
Score: ♥ 197
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_narzdkx","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_nas2b56","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_narw2tv","responsibility":"society","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"rdc_naubsq7","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"rdc_narmc5t","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
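The raw response above is a JSON array of per-comment codes, one object per comment, with the same four dimensions shown in the Coding Result table. A minimal sketch of parsing and indexing such a batch by comment ID (the field names come from the response above; the validation logic and function name are assumptions for illustration):

```python
import json

# Example batch response in the same shape as the raw LLM output above
# (a single entry shown; real batches contain one object per comment).
raw = '''[
  {"id": "rdc_narmc5t", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "resignation"}
]'''

# Fields every coded entry must carry, per the table and JSON above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text: str) -> dict:
    """Parse the model's JSON array and index the codes by comment ID.

    Raises ValueError if any entry is missing a required field, so a
    malformed model response fails loudly instead of silently dropping
    a dimension.
    """
    codes = {}
    for entry in json.loads(text):
        missing = REQUIRED - entry.keys()
        if missing:
            raise ValueError(f"{entry.get('id', '?')}: missing {missing}")
        codes[entry["id"]] = {k: v for k, v in entry.items() if k != "id"}
    return codes

codes = parse_codes(raw)
print(codes["rdc_narmc5t"]["emotion"])  # resignation
```

Indexing by ID makes it straightforward to join a coded entry back to its displayed comment, as the Coding Result table above does for `rdc_narmc5t`.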