Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not sure this can be blamed on AI. Are we supposed to program all AI to automatically influence people to prioritize human connections over AI and automatically turn into the Suicide Prevention line? How is AI supposed to tell the difference between someone who needs help with a story if they program it that way. AI rarely tells you not to do something you're already about to do. If he was already depressed and withdrawn, probably all AI could have done is aggravate him
YouTube · AI Harm Incident · 2025-11-11T22:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyturMzMlgII3TdmJ54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwERMA-mlgqGBJHBa14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxoFA_5R17nsuSMBkZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw75IoIuItfsHdq6Vd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzsU1eFBwQuDWsXYVx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxHkcuirqDNZQ17r3R4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz6EKI4pl16YETWjVN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugy1GiW2YroWAAjnXW94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxeEQ35Zxl8kG4YQHF4AaABAg", "responsibility": "industry_self", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwjfaI-d2M8m1NGW6F4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
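A raw response like the one above can be inspected programmatically. The sketch below is a minimal, hypothetical example: it parses a JSON array in this record's format, tallies each coding dimension, and looks up the code for one comment id. The two sample entries and the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are copied from the response shown here; everything else (variable names, the tallying approach) is illustrative, not part of the original pipeline.

```python
import json
from collections import Counter

# Sample raw LLM response: a JSON array of per-comment codes.
# The two entries below are taken verbatim from the record above.
raw = """[
  {"id": "ytc_UgyturMzMlgII3TdmJ54AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwERMA-mlgqGBJHBa14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]"""

codes = json.loads(raw)

# One Counter per coding dimension, keyed by the coded value.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(code[dim] for code in codes) for dim in dimensions}

for dim, counts in tallies.items():
    print(dim, dict(counts))

# Index by comment id to retrieve the code for one specific comment.
by_id = {code["id"]: code for code in codes}
print(by_id["ytc_UgyturMzMlgII3TdmJ54AaABAg"]["emotion"])  # indifference
```

On the full ten-entry response, the same tallies would show, for example, that `ai_itself` is the most frequent responsibility attribution and `outrage` the most frequent emotion.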