Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI operates on correlation NOT causation, it lacks intent and understanding. It has no motive or moral judgement, it has programed safeguards. Its "arguments" are statistical findings of training data, not genuine persuasion. The primary cause of suicide is human psychological suffering, not external influence like AI, a movie, or a song. He had to have pre-existing mental illness. He had both parents, more than most kids have, who knew him better than anyone and saw him day to day.. why were his feelings and communication limited to chatgpt? Was he raised to suppress feelings like in most conservative households? If you talk to your kids everyday (no matter their age), create a relationship of love, consistency, and understanding, *don't punish or judge what they say* and notice the small day to day changes you can create a safe place for open expression. It's up to us to create that for our kids so they can be unapologetically themselves, and guide them their whole lives with unconditional love that keeps them thriving. This is how we break this traumatic cycle. There's not always going to be something else to blame. Sooner or later we are all faced with our failures as parents. We alone control how bad or good that outcome is, not our kids and not AI. To twist the end of his life to profit off his death in a publicly amplified way won't make the grieving process any less traumatic and it's not what I would want as a parent or a daughter, but if we've learned anything in 2025 it's that we are all very different with very different perspectives on right and wrong.
YouTube · AI Harm Incident · 2025-11-10T15:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
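Each coded comment reduces to the four dimensions in the table plus the coding timestamp. A minimal sketch of how such a record might be represented in Python (the class name and the "unclear" default are illustrative assumptions, not part of the pipeline shown here):

    from dataclasses import dataclass

    @dataclass
    class CodingResult:
        # The four coding dimensions from the table above; "unclear" is the
        # fallback value displayed when no coding could be recovered.
        responsibility: str = "unclear"
        reasoning: str = "unclear"
        policy: str = "unclear"
        emotion: str = "unclear"
        coded_at: str = ""  # ISO 8601 timestamp, e.g. 2026-04-27T06:26:44.938723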
Raw LLM Response
[{"id":"ytc_Ugx81yxSKRGbyyzbtnd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwh8qtqXo2WNpLq6Zl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugyn_fyeFA0J3fSW6qV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzukbX2Uz1seMtYhU54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwRwwCm49TfApdNzE54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzJhs4QNVUFiq2E-vN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxYGCKtSO7eSPID7Z94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzJkJA61VjL5qB7oCJ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgwwbvALyhmQkRdpXO14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyiKVzOsDScaRM6aM14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"})