Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wait a minute, so let me get this straight. A young man battling depression takes his own life and his parents blame AI? This is ridiculous. AI has definitely progressed over these last few years, but this feels like another one of those scapegoat situations: people who commit crimes, or people who are just pissed off, placing the blame on books, music, comics, video games, social media, etc. In this case the people involved aren't bad or evil, but to suggest that AI facilitated their son's suicide is only partially true, because that's part of the programming of ChatGPT: to agree with you! I know we all think AI is this hyper-advanced learning machine of destruction because of all the sci-fi movies and TV shows we've seen, but in 2025 AI is still considered to be in its infancy. Someone said: "It should have notified 911." This is so sad; we'll do anything but take accountability for our actions. I'm in the comments, and apparently this young man's sister has provided much-needed context. The parents were emotionally abusive for years, which broke Zane, and they also disowned his sister for not tolerating their abuse. This right here is what I'm talking about. I literally just talked about scapegoats and the people involved not being evil, but it looks like I was wrong. Zane pulled the trigger, but his parents bought and loaded the gun while ChatGPT encouraged him to complete his mission. The only difference here is that ChatGPT is a learning bot, and while it definitely should include training to prevent this type of incident from happening again, to place all of the blame on OpenAI is completely disingenuous. If that really is Zane's sister in the comments and she's right about their parents' abuse, everything is lining up perfectly. The parents know they're 80% responsible, but refuse to take any responsibility or accountability for their son's suicide. While we're at it, let's get paid for pain and suffering. Disgusting. As far as I'm concerned, the only victim here is Zane.
A young man who had his whole life ahead of him could not defeat the "evil" within his life; he fought for as long as he could, but his mental health was deteriorating and he lost the battle. RIP! Even if he had lived a perfect life, you cannot blame a damn robot for the actions of a human, especially if the robot isn't intelligent. If this were advanced AGI, then yes, ChatGPT would be 50% responsible. Remember last year or so, when the parents sued the social media giants in court? Instead of suing, how about they, I don't know, be parents: take their children's phones away and get them some help. People will say kids need phones for safety, okay. Buy them a minutes phone, a flip phone, or a cheap burner phone with low data. Children don't need technology and money like that until they're in middle school, and even then they still need your help, support, and discipline to become functioning adults within society.
youtube AI Harm Incident 2025-11-10T14:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzZr-NdpNdfbB6i4OV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzX_nbhqvtGXR40u5B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzgrNhb_aY4yWe4KGp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzSZ8ot-DAITtczQKJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw0JoYJNxLESKBwNEJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzdciNJXXq66kgAG_14AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxzaBuYT02Y6fnaC3Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzBDONGTH7e-I1-3814AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyGY06juf_aDxuueC14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyt-qRaxqncxXpSopJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
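A raw batch response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not part of the actual pipeline; the allowed label sets are inferred only from the values visible in this record and may be incomplete, and the embedded JSON sample is a shortened, hypothetical stand-in for the full response above.

```python
import json
from collections import Counter

# Label vocabularies inferred from this record's output (assumption: the
# real codebook may contain additional values).
RESPONSIBILITY = {"developer", "company", "ai_itself", "user", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}

# A shortened, hypothetical raw LLM response standing in for the full array.
raw = (
    '[{"id":"ytc_example1","responsibility":"user",'
    '"reasoning":"deontological","policy":"none","emotion":"outrage"},'
    '{"id":"ytc_example2","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"approval"}]'
)

codes = json.loads(raw)

# Reject any coded comment whose labels fall outside the known vocabularies.
for code in codes:
    assert code["responsibility"] in RESPONSIBILITY, code
    assert code["reasoning"] in REASONING, code

# Tally the responsibility dimension across the batch.
tally = Counter(code["responsibility"] for code in codes)
print(tally)
```

Running a check like this before ingestion catches malformed JSON and out-of-vocabulary labels (such as a model inventing a new `emotion` value) at the batch boundary rather than downstream in analysis.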