Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI definitely needs to be better at refusing to go further with dangerous conversations, even if people have their own problems, the AI cannot be reinforcing them
Source: youtube · AI Harm Incident · 2025-11-29T17:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxYB97JrmWQtAYZ_lV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxKc023EZJTrBpu7yN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyuQaEs45VAWPRUi2B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxc_xI5qOgck-febn54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzjxWgTStio9QE6Umd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxbzBA0xnkYl3WuruN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyB8g86ectoTExaoDV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzRDe8nWKDlXTHnSSp4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzaT1IoZhK_7ZGiU1l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugybt8eH754sgDQ-lqN4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
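The model returns one JSON array per batch, and the coding shown above is recovered by matching the comment's `id` inside that array. A minimal sketch of that lookup, assuming the raw response is valid JSON (the sample below embeds only two records from the array above; the `lookup` helper name is illustrative, not part of any real pipeline):

```python
import json

# Raw batch response as returned by the model, truncated to two of the
# ten records shown above.
raw = """
[
  {"id": "ytc_UgyuQaEs45VAWPRUi2B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzRDe8nWKDlXTHnSSp4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse the batch response and return the coding for one comment id."""
    records = json.loads(raw_response)
    for record in records:
        if record["id"] == comment_id:
            return record
    raise KeyError(comment_id)

coding = lookup(raw, "ytc_UgyuQaEs45VAWPRUi2B4AaABAg")
print(coding["reasoning"], coding["policy"], coding["emotion"])
# consequentialist regulate outrage
```

In production one would also guard against malformed or non-JSON model output (e.g. wrap `json.loads` in a try/except and re-prompt on failure), since nothing forces the model to emit a parseable array.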