Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sam Altman didn't mean that the kid was a 'stake', so we could start right there. But there's so much more. Japanese anime can be violent, and we could start there. But there's so much more. It'll be interesting how this plays out. And it'll probably be a payout where Open AI doesn't have to admit fault and the parents will take it. Kids aren't stupid. They know Chat isn't a real person better than most adults. It seems this kid may have let Chat take him where he wanted to go. This is all due to what in the AI arena is known as "the alignment problem," which, according to some AI insiders, is too late to fix before we hit and have superintelligence. With the "who gets to superintelligence first will run the world" problem, there simply isn't enough time to train AI to adopt "human life/living values." There isn't even agreement on whose or what those values should be. There are more people in the field who discredit even a need to build values into AI than those pushing for it. Still others who think AI can and will simply come up with its own work around, anyway. As E. Yudkowsky says [or quotes] "...and then it kills us."
youtube AI Harm Incident 2025-08-31T21:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy9tKzO5tp9gX_vHH94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz2COvK2beRfK65bMV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzRTPVkmSrZPDA0V4p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxjwfbbKClFgsRGd454AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxNNndkYod_G9BdMRh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzIRM9iKHBAsPfbkb94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz-MPwMBCkEDxeYSGR4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugykx7T_wmDH3o2TEPx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxC6p5xSnQNQDEoxuJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzuqiIghNwL45ZEI4B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
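As a minimal sketch of how a raw batch response like the one above can be turned into the per-comment Coding Result table: parse the JSON array, index the records by comment id, and look up the id of the comment on this page. The two-record excerpt below is copied from the raw response; the parsing code itself is an illustrative assumption, not the tool's actual implementation.

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten records).
raw = '''[
  {"id": "ytc_Ugz-MPwMBCkEDxeYSGR4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzRTPVkmSrZPDA0V4p4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Parse the batch response and index the coded records by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The record for the comment shown on this page matches the Coding Result
# table: responsibility=unclear, reasoning=mixed, policy=none,
# emotion=indifference.
rec = by_id["ytc_Ugz-MPwMBCkEDxeYSGR4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```

Indexing by id rather than by position keeps the lookup robust if the model returns the batch in a different order than the comments were sent.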