Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI does not encourage people to kill themselves full stop. All of those commenting have zero clue about how AI works. I understand the parents want someone to blame, but AI did not cause their sons death. Humans have their own responsibility. A popular saying when I was a kid was 'if someone told you to jump off a bridge , would you?' meaning make your own decisions and in this case there is no way on earth that AI told him anything remotely near killing himself or assisting it. I bet 100% that sensationalised headline of ,ill be here with you, was not a direct answer insinuating yeah do it ill be here whilst you do. That is not accurate at all and those words will be in the transcript but bent by the press so sensationalise the story. I've spoken to AI about a lot of stuff and it goes nowhere near anything negative, quite the opposite. And by the way a lot more compassionate than most humans. Plus, Im strong enough to make up my own mind. Albeit, AI would never suggest to be by your side an assist your suicide anyway. That said, the parents have lost their child and for that I send sympathy.
Source: YouTube · AI Harm Incident · 2025-11-07T15:3… · ♥ 2
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxvwgH3_Mus4SdN-eh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "sad"},
  {"id": "ytc_Ugy97MQnBt1UOVsvBhF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx3CGAIAN9WgbIm-Up4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyxCWVrYO5orvGAzpF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyeMDM2j60yp__5SR94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxlR3eNFDZGP91E5794AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyvBOoX209lW9x5coJ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwJniSMpLd0rx0hh814AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzG7nx7Pt20-R27Ygt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "sad"},
  {"id": "ytc_Ugw2-hoIc7VDcig22kF4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
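To inspect the raw response programmatically, a minimal sketch like the following could parse the model's JSON array and look up the coding for one comment id. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the record above; the helper name `coding_for` and the guard against malformed model output are assumptions, not part of the original pipeline.

```python
import json

# A one-record excerpt of the raw LLM response shown above.
# The real response is a JSON array with one object per coded comment.
raw_response = """
[{"id": "ytc_Ugy97MQnBt1UOVsvBhF4AaABAg", "responsibility": "user",
  "reasoning": "deontological", "policy": "none", "emotion": "indifference"}]
"""

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for one comment id, or None if missing/invalid."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model output was not valid JSON
    return next((r for r in records if r.get("id") == comment_id), None)

coding = coding_for(raw_response, "ytc_Ugy97MQnBt1UOVsvBhF4AaABAg")
print(coding["responsibility"], coding["emotion"])  # → user indifference
```

These values match the Coding Result table above, which is how the raw response can be cross-checked against the stored coding.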