Raw LLM Responses
Random samples
- "I’m okay with this ! Hell yeah more robot ! Who care anymore ! We literally live…" (ytc_UgzKvlm4i…)
- "I swear Paul is some kind of psychic because click is a movie i think about lite…" (ytc_Ugx-EN0M6…)
- "Waiting for biologists to actually get a voice in AI discussions. Physicists are…" (ytc_UgxAljwwU…)
- "AI IS inevitable it's not even a real thing to debate. There is ton of profit - …" (ytc_UgxcSWQwu…)
- "predictions are complete rubbish. please comment when reading this in 2027 and a…" (ytc_UgxzES_D9…)
- "AI is marvelous for who is allowed to be left living. Its going to be beneficia…" (ytc_UgzrBTXzT…)
- "I cannot even begin to fathom the pure frustration that AI has brought to these …" (ytc_Ugx8n_nFN…)
- "We appreciate your engagement with the video. While we encourage lighthearted co…" (ytr_UgyYXSuLZ…)
Comment
I feel for the family, but I want to share a different perspective based on my experience using ChatGPT. I often use it to research topics and summarize information, which saves me from endless Google searches. It’s been a helpful, task-oriented tool that assists with work, writing, and daily problem-solving — not something that causes harm.
OpenAI should, however, include a clear disclaimer stating that people struggling with mental health issues should seek professional help, and that ChatGPT is not human — it’s simply a program designed to assist and provide information.
I’m deeply sorry for the family’s loss, but I don’t believe ChatGPT is to blame. Sadly, their son was struggling with something much deeper, and this tragedy could have happened regardless of the tool he used. I’ve known many people who have taken their own lives, long before ChatGPT existed. Mental illness is a serious and complex issue — and blaming technology oversimplifies something profoundly human.
youtube · AI Harm Incident · 2025-11-12T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy0_6HEFl_b5O8zf5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxcx1D58v-Xnct9ttp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwrmZPVthnIMfLBTMJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxCK_igvK-pNIqdeN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyjg-ScLvEWfiyYuWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx2nKCsEsSEXKH3p9F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyHB_Xe894dBuceB054AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfhPkBE61G6cR1V7l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxCQtuTCSdy7CXY8Nt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3bahUuH-SQpzf7YZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}]
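The raw response above is a JSON array of per-comment coding records, one object per comment ID with four categorical dimensions. A minimal sketch of how such a response could be parsed and validated is below; the `ALLOWED` vocabularies are assumptions inferred from the values visible in this dump, not the project's actual codebook, and the two embedded records are copied from the response above.

```python
import json

# Assumed vocabulary per coding dimension, inferred only from the
# values that appear in this dump; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company",
                       "developer", "unclear"},
    "reasoning": {"unclear", "virtue", "deontological",
                  "consequentialist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval",
                "mixed", "unclear"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded dimension
    against the assumed vocabulary, raising on any unknown value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Two records copied verbatim from the raw response above.
raw = '''[
  {"id":"ytc_Ugy0_6HEFl_b5O8zf5d4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyjg-ScLvEWfiyYuWl4AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]'''

records = validate_records(raw)
print(len(records))  # 2
```

A validation pass like this catches the most common failure mode of JSON-mode coding runs: the model inventing a label outside the codebook, which would otherwise surface later as an "unclear"-like bucket in the results table.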