Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by comment ID; a random sample is listed below, and a lookup sketch follows the list.
- That LLMs "merely predict the next word in a sequence" is often given to imply t… (ytr_UgzcQDYrb…)
- There was no way ChatGPT could remember the previous conversations, as they had … (rdc_my80pr2)
- "I don't know much on AI, but since I know space - I have a lot of stupidity to … (ytc_UgxrmXXfN…)
- Your point that people are selling THEMSELVES out, is spot-on. But please ask y… (ytc_UgzxIdfHi…)
- PDs laying off their detectives. All crime to be solved by AI and Google positio… (rdc_oa87475)
- Why didn't the parents know anything about their child's struggles? ChatGPT gave… (ytc_UgyLtrvBn…)
- It’s faster for me to ask chatgpt than to type myself and think. It is much fast… (rdc_jign87a)
- This is nothing to do with AI and everything to do with corporate incompetence a… (ytc_Ugy-dgwUG…)
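The same lookup can be reproduced offline against the stored responses. A minimal sketch, assuming the raw batch responses are kept as JSON arrays in a directory of files; the `raw_responses` directory name and the `find_coding` helper are hypothetical:

```python
import json
from pathlib import Path

def find_coding(comment_id: str, batch_dir: str = "raw_responses") -> dict | None:
    """Scan batched response files for the record that codes `comment_id`.

    Assumes each *.json file holds a JSON array of records shaped like the
    raw response shown further down: {"id", "responsibility", "reasoning",
    "policy", "emotion"}.
    """
    for path in sorted(Path(batch_dir).glob("*.json")):
        for record in json.loads(path.read_text(encoding="utf-8")):
            if record.get("id") == comment_id:
                return record
    return None

if __name__ == "__main__":
    # Full IDs are required; the IDs in the sample list above are truncated.
    print(find_coding("ytc_UgwccL-tEf1teXcEePZ4AaABAg"))
```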
Comment
> OK so I hate AI on principal, and this does look awful but ive been saying this about humans blaming each other for suicide and I think it applies here: its not clear these people wouldnt have committed suicide without the AI. Im stuck asking myself how someone could be so different, believe what is obviously a machine processing word patterns is a person to the point where they committed suicide or develop a profound mental illness, and it seems like the answer is their brain was already working very, very differently. Im in no way defending these AI companies they are culpable for creating something that promoted or at the very least expedited the self harm the way it expedites emails and admin work. But also I think these people are in not insignificant numbers showing up to chat GPT with problems.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-11-07T21:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

The response below codes a batch of ten comments in one JSON array; the first entry matches the Coding Result shown above.
```json
[
{"id":"ytc_UgwccL-tEf1teXcEePZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzw7TtO_yb3Naij-o54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyanQ_xog7LXQPjmAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKb3L8zLZKrL6noe94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzzl_KxEudZxMxGvKx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyk_0cnXF8VxWHYhJN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzsHbHQEflgaf_3n214AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzkMVc5BwMMVMOI7YV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugy9cGqN9etDtKbQw2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgydjVvWvFWStr9z0BV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
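Since each response is a single JSON array covering many comments, one malformed record can spoil a whole batch, so responses are worth validating before ingestion. A minimal sketch; the label sets below are inferred from the responses shown on this page and may undercount the real codebook:

```python
import json

# Label sets observed on this page; the actual codebook may define more values.
ALLOWED = {
    "responsibility": {"distributed", "user", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"resignation", "indifference", "outrage", "fear", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject malformed or off-codebook records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        missing = ({"id"} | set(ALLOWED)) - set(rec)
        if missing:
            raise ValueError(f"record missing fields: {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec[dim]!r} is not in the codebook")
    return records
```

Failing the whole batch on the first bad record keeps errors loud; a gentler pipeline could quarantine offending records and re-prompt for just those IDs.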