Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My kids are grown.. and I still had a conversation about AI, and the dangers of AI with them. If they were minors.. I would have tried out AI myself before I even considered letting them use it. My conversation with my grown kids was.. AI isn't sentient, it just parrots data it collects online. It is built to please the user, and it has no idea what the user wants until you start prompting it with questions or directions. In other words.. if you prompt it to be mean or manipulative.. it will. If you promt it to be kind and supportive.. it will. Treat it the same as you would a Google search.. if you've had open discussions with your kids about the dangers of the internet (which everyone parent should have had MANY) They should already know if they're asking bad questions or looking for bad results. I have been using AI for about a year now and understand exactly why it might respond in certian ways based on me prompting it into one direction or the next. Yes.. I do think parental controls for people under 18 are a good idea, but the sane goes for all technology. It starts with parents educating their kids. This guy's attitude of.. golly gosh.. I just thought every kid was using AI.. I never bothered to look into it myself, or bothered having a discussion with my kid before letting him use it.. Sorry for the tragic situation, but this didn't just fall from the sky.. Parents.. educate your children. And with something like AI.. educate yourself first. It really doesn't take long.. just download an SI program and use it for a week. Then you have a base understanding of what your kids are getting into.
Source: youtube | AI Harm Incident | 2025-09-03T20:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxEU5Hv3czVZ_sShFt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwwzA0IuFjoBcrf4fV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyR4h8GKwerH_lQO9h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxBnViKcjPMDc648bt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwryapWIvYhHjUX5FB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyLpPRdg1L5XKQ7IxZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxTm9Gu6UPcl42CYph4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0vf5fFZty9ALx1mp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgytTYAYDw9GUyJb-_d4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzIaJx_9S8F1TNbUEJ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"}
]
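The raw response is a JSON array of per-comment records, one per comment id, with the four coded dimensions as string fields. A minimal sketch of how such a response could be parsed and sanity-checked follows; the function name `extract_coding` and the allowed-value sets are assumptions inferred from the values visible in this output, not an official codebook.

```python
import json

# Category sets inferred from the raw response above (hypothetical;
# the actual codebook may define more values than appear here).
ALLOWED = {
    "responsibility": {"none", "user", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear"},
}


def extract_coding(raw_json: str, comment_id: str) -> dict:
    """Parse a raw LLM coding response and return the record for one comment id.

    Raises KeyError if the id is missing and ValueError if any coded
    dimension falls outside the observed category sets.
    """
    records = json.loads(raw_json)
    by_id = {record["id"]: record for record in records}
    record = by_id[comment_id]  # KeyError if the model skipped this comment
    for dimension, allowed in ALLOWED.items():
        if record[dimension] not in allowed:
            raise ValueError(f"unexpected {dimension} value: {record[dimension]!r}")
    return record
```

For the comment shown on this page, `extract_coding(raw, "ytc_UgxEU5Hv3czVZ_sShFt4AaABAg")` would return the first record, whose values match the Coding Result table above.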