Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree with the idea that AI or OpenAI is to blame for tragic situations involving users in mental health crisis. These individuals were already struggling, and there are clear warnings on every major AI platform not to treat it as a doctor, therapist, or lawyer. Like any other form of media, it is up to the user to apply critical thinking and use the tool responsibly. In my experience, AI has been life-changing in a positive way. For years, I faced complex medical, legal, and disability challenges that no professional could fully explain or coordinate. By using AI as a research guide—while verifying everything through real medical science, peer-reviewed sources, and official legal regulations—I finally uncovered the root causes of my spinal degeneration and autonomic dysfunction after more than twenty years of being overlooked by the system. Blaming AI for human misuse is like blaming Google because not everything on the internet is factual. The problem is not the technology itself but how people use it. I rely on AI as an ADA accommodation to help me process information, organize documentation, and communicate effectively. I also use it for creative work, but I know the difference between creative exploration and actual research or treatment. For me, AI has not been harmful—it has been life-saving. It provided educational and organizational tools that I desperately needed and that no one else was willing or able to provide. The real tragedy is when people misuse a powerful tool out of confusion or desperation, not when a tool like this exists to help those who use it responsibly.
youtube AI Harm Incident 2025-11-07T20:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugy6b8F1FI5S63ISTs54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwgZmXTdMVNZDh95t14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxdLqRlHefU4gsppA54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxXEgM6_IqnVsemvjF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzDUavGL__FdNMQd0h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwIkPeK6bOxsGQdw8J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwMzuDyPuG3Q6__ai14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxp77jKXIBFNu2rmhB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwr--PlSRQfR6EOUr94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwqcrRjlNSTRTQ5YVV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
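The raw response above is a JSON array with one record per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and indexed for lookup (the field names match the example above; variable names here are illustrative, and this assumes the model returned well-formed JSON):

```python
import json

# Raw model output: a JSON array of per-comment coding records.
# Truncated to one record here for brevity.
raw = (
    '[{"id": "ytc_Ugy6b8F1FI5S63ISTs54AaABAg", "responsibility": "user", '
    '"reasoning": "virtue", "policy": "none", "emotion": "resignation"}]'
)

# Parse the array, then index records by comment id so each coded
# comment's dimensions can be looked up directly.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

print(by_id["ytc_Ugy6b8F1FI5S63ISTs54AaABAg"]["responsibility"])  # user
```

In practice the parse should be wrapped in error handling, since a model may occasionally emit malformed JSON.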