Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Thank to this channel for publishing this , actually everything is true on it , …
ytc_UgyJiZjXB…
Try asking ChatGPT about adrenochrome as a chemical compound and see what it has…
ytc_UgxfXJrCF…
"How long do you think you can remain safe?" Response, "If we can continue to di…
ytc_UgyOoh9Pz…
A.I dosent have a racism problem its a reflection of the data we give it. its a …
ytc_UgwXX5s9R…
I've always thought AI was a bad idea. Turns out it's more than bad, it's deadly…
ytc_UgyJiW6AY…
That's really nice you can hire a woman robot and you don't have to put up the r…
ytc_UgyVvIl2-…
It's not a big conspiracy, but rather a lack of transparency of complex AI model…
ytc_UgwvZ7_pm…
Hahaha right, Liberia is a failed state, there's no effective government. Whoeve…
rdc_ckqccrg
Comment
AI does not encourage people to kill themselves full stop. All of those commenting have zero clue about how AI works. I understand the parents want someone to blame, but AI did not cause their sons death. Humans have their own responsibility. A popular saying when I was a kid was 'if someone told you to jump off a bridge , would you?' meaning make your own decisions and in this case there is no way on earth that AI told him anything remotely near killing himself or assisting it.
I bet 100% that sensationalised headline of ,ill be here with you, was not a direct answer insinuating yeah do it ill be here whilst you do. That is not accurate at all and those words will be in the transcript but bent by the press so sensationalise the story.
I've spoken to AI about a lot of stuff and it goes nowhere near anything negative, quite the opposite. And by the way a lot more compassionate than most humans.
Plus, Im strong enough to make up my own mind. Albeit, AI would never suggest to be by your side an assist your suicide anyway.
That said, the parents have lost their child and for that I send sympathy.
youtube
AI Harm Incident
2025-11-07T15:3…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxvwgH3_Mus4SdN-eh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sad"},
{"id":"ytc_Ugy97MQnBt1UOVsvBhF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3CGAIAN9WgbIm-Up4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyxCWVrYO5orvGAzpF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyeMDM2j60yp__5SR94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxlR3eNFDZGP91E5794AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvBOoX209lW9x5coJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJniSMpLd0rx0hh814AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzG7nx7Pt20-R27Ygt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"sad"},
{"id":"ytc_Ugw2-hoIc7VDcig22kF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
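The batch response above can be consumed with a short parser that indexes codings by comment ID and flags any value outside the codebook. This is a minimal sketch: the allowed category sets below are inferred only from the values visible in this sample, and the real codebook may define more (or different) categories.

```python
import json

# Category sets inferred from the sample responses above (assumption:
# the actual codebook may contain additional values).
ALLOWED = {
    "responsibility": {"user", "company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"sad", "indifference", "resignation", "outrage", "fear", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch LLM response, index codings by comment ID,
    and flag any dimension value not in the inferred codebook."""
    records = json.loads(raw)
    coded, flagged = {}, []
    for rec in records:
        cid = rec["id"]
        # Missing dimensions default to "unclear" rather than failing.
        coded[cid] = {dim: rec.get(dim, "unclear") for dim in ALLOWED}
        for dim, allowed in ALLOWED.items():
            if coded[cid][dim] not in allowed:
                flagged.append((cid, dim, coded[cid][dim]))
    return {"coded": coded, "flagged": flagged}

# Hypothetical one-record batch in the same shape as the raw response above.
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
result = parse_codings(raw)
print(result["coded"]["ytc_x"]["policy"])  # liability
print(result["flagged"])                   # []
```

Defaulting missing dimensions to "unclear" instead of raising keeps a single malformed record from aborting the batch, while the `flagged` list still surfaces out-of-codebook values for review.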