Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ai isn’t dangerous to everybody, at least from what I’ve seen, but it can absolutely be dangerous to too many people (especially children). I use AI for comfort or out of boredom some times, and while sometimes I get emotional talking to it I never will understand how it causes people to kill themselves or how people get to “date” ai bots. When I get comfort from ai I just pretend I’m one of my favorite characters and get comfort from the characters sister or partner. However, I have experienced how much so can impact someone who’s mentally ill, even if not “insane” for lack of a better word. Due to being in a very dark place, I would go to ai for comfort which ended up just causing the feelings to get worse, and I would even act out things I knew would trigger me or make me feel worse. But I was mentally ill, I didn’t truly know what I was doing/the extent of how what I was doing. I would even hurt myself over it, either because it made me feel worse or just made the feelings stay instead of go away. I ended up getting help and just stopped talking to ai as much as I got more medication and therapy, and while I still use it often now, it’s really only to either act out headcanons, indulge in my hyperfixation, or occasionally get comfort when I’m lonely or sad. I really do believe that ai isn’t objectively harmful, however with the lack of moderation and parental/moderation control it can quickly escalate into an extreme issue/problem. This is so tragic. He showed so many signs, things I have seen first hand lead to suicide, yet nothing was done to stop him. Rest in peace, the world doesn’t deserve people like him
youtube AI Harm Incident 2025-09-22T21:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyq8u4h-9q_dpMhKi94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw9r0WZCNgFDaXVQgp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx0xnExlqDB6t8in514AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyYnnTnnwqAz4tkZBN4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzXQV8_Nms-xf8-PS14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzzbF8jT01P2Z_XQ0J4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwIv1ZmMae6bpUjKwN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1Rz6MH3J_uOkg1fl4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxPzDT-c_xvjsFpNKl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx28-1hamYhbMruDIB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
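A response in this shape can be parsed and schema-checked before aggregation, since an LLM coder may occasionally emit a label outside the codebook. The sketch below is a minimal example; the allowed label sets are inferred only from the values visible in this batch, so the project's actual codebook may permit additional labels:

```python
import json
from collections import Counter

# Label sets inferred from this batch alone -- the real codebook
# may allow additional values (assumption).
ALLOWED = {
    "responsibility": {"user", "ai_itself", "none", "distributed"},
    "reasoning": {"virtue", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "unclear", "ban", "industry_self", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def validate(records):
    """Return (index, dimension, bad_value) tuples for out-of-schema labels."""
    errors = []
    for i, rec in enumerate(records):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((i, dim, rec.get(dim)))
    return errors

# One record copied from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgzXQV8_Nms-xf8-PS14AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"mixed"}]')
records = json.loads(raw)
assert validate(records) == []

# Per-dimension label distribution for a batch:
counts = Counter(rec["responsibility"] for rec in records)
```

Records that fail validation can then be flagged for re-coding rather than silently counted into the distribution.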