Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "As a software engineer with some experience of neural networks and other AI syst…" (ytc_UgxrOEHyY…)
- "I can definitely see AI becoming a larger and larger part of people's workflows …" (rdc_kz0ygjt)
- "i was literally thinking about how ai can be dangerously used to mimic people th…" (ytc_Ugz0OEKCO…)
- "I don't believe they will learn real skills if they just want to have fun all th…" (ytc_UgysueXCn…)
- "@AG-ng1ml you're assuming I don't draw because I use AI art, I use both, I use A…" (ytr_UgzEcEJxZ…)
- "In much the same way any real programmer (or anyone well-versed in technology, f…" (ytc_Ugx5LTmts…)
- "So what I found out is the ChatGPT does not talk. Do you know what text to speec…" (ytr_UgzNhjBQM…)
- "Stop AI immediately. It is going to kill the human race even though it’s not an …" (ytc_UgyKJRvc0…)
Comment

ChatGPT should be sued but characterizing this as an insidious technology that is tricking people into suicide does a disservice to mental health itself and is grossly reductive. These are mentally ill people who have been underserved by our national healthcare system. Instead they turned to a patronizing AI that allowed them to indulge themselves in their suicide ideations.

| Source | Topic | Posted | Likes |
|---|---|---|---|
| youtube | AI Harm Incident | 2025-11-07T20:1… | 12 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugy6b8F1FI5S63ISTs54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwgZmXTdMVNZDh95t14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxdLqRlHefU4gsppA54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxXEgM6_IqnVsemvjF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzDUavGL__FdNMQd0h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwIkPeK6bOxsGQdw8J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwMzuDyPuG3Q6__ai14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxp77jKXIBFNu2rmhB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwr--PlSRQfR6EOUr94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwqcrRjlNSTRTQ5YVV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]