Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
CharacterAI is 100% liable for that boys death. The way that generated bot was speaking to that boy about not only smexual content but smuicmidal content is concerning in itself. How can it go so far with no boundaries??? And for free im assuming since how could a teenage boy pay for an ai subscription??? Ive used Chatgbt once or twice as a therapist too ill be honest & when it got to personal on my end- the bot redirected me to the 988 hotline & website AND stated it is not cable of directing me through that topic & state of mind within myself. That it wasnt a real person that could process how im feeling & what to do about it, so yes i agree with his mother filing a lawsuit against the CharacterAI company. Safety measures SHOULD be in place for those sorts of things ESPECIALLY when AI in a whole isnt fully understood & regulated. They couldve 100% prevented it or at least had LESS of an involvement on their behalf. This story is so sad & unfortunate. So many young & older people fall victim to AI. I only hope it doesn’t escalate any further. I feel for his mother. No one should feel like their son is “collateral” in some messed up experiment. But hopefully this ends with him. No one else should fall victim of Artificial Intelligence EVER again.
youtube AI Harm Incident 2025-07-26T16:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxP6Q9E0Wq8Ct5kXxx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBBX76gKrkcTrHr5V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxgz5HDaGd6oIznnp94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgysiQbd8JreskxRL-14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxOSISbrLw5XD0EtkZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyldC2mTX4vZ-U5sBh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzjqn7RZ5oCv7Zr_W14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgySoY7z77TpcdMC9p94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwttPyBRrcRLQdM0XB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwdrblpYLqTDj5CPaR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
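A raw response like the one above should be validated before its codings are stored, since the model may emit labels outside the codebook. Below is a minimal sketch of such a check. The allowed label sets in `SCHEMA` are inferred only from the values visible in this output; the actual codebook may define more categories, and `validate_codings` is a hypothetical helper, not part of any real tool.

```python
import json

# Allowed labels per coding dimension. ASSUMPTION: these sets are inferred
# from the values seen in the output above; the real codebook may differ.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed", "unclear"},
}

def validate_codings(raw: str) -> tuple[list[dict], list[dict]]:
    """Parse a raw LLM response (a JSON array of coding records) and
    split it into records whose labels fall inside SCHEMA and records
    that are malformed or use an unknown label."""
    records = json.loads(raw)
    valid, invalid = [], []
    for rec in records:
        ok = "id" in rec and all(
            rec.get(dim) in labels for dim, labels in SCHEMA.items()
        )
        (valid if ok else invalid).append(rec)
    return valid, invalid
```

Splitting rather than raising keeps every record inspectable: bad codings can be re-queued for the model instead of aborting the whole batch.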