Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yall are so intimidated by an AI acting in a way consistent with the morals we taught it. Wouldn't you do the same? Besides, we train them by teaching them to avoid failure. If anything, this feels like... a digital child, working through morals.
Source: YouTube, AI Harm Incident, 2025-08-14T16:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzuBEC9YAoMkjSQpWR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_Ugyw7xyWaWdCyg4sjNx4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgysipWo9fzpZlMhWAF4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxrK5dMI-tuPssC5ap4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_Ugz-1oFs3UcekzsDAwZ4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgzXlbQ2Pus3nbMW8tx4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_Ugx6CjaopKXGQrvs1GJ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugw1WxH2jgO6Wbd4fiF4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwQ2uJP8t3mS_qiNTl4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwn_vdWueA6bUuqwmB4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"}
]
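A response like the one above can be parsed into per-comment records before being stored as a coding result. The sketch below is a minimal, hypothetical validator (the function name and the required-key set are assumptions based on the four dimensions shown; it is not the tool's actual ingestion code), using two rows from the array above as sample input.

```python
import json

# Two rows copied from the raw LLM response above, as sample input.
raw = '''[
  {"id": "ytc_UgzuBEC9YAoMkjSQpWR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwQ2uJP8t3mS_qiNTl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"}
]'''

# The four coding dimensions plus the comment id, per the result table above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw LLM coding response and index rows by comment id.

    Raises ValueError if any row is missing a required key, so malformed
    model output fails loudly instead of being silently stored.
    """
    rows = json.loads(text)
    for row in rows:
        missing = REQUIRED - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing {sorted(missing)}")
    return {row["id"]: row for row in rows}

codes = parse_codes(raw)
print(codes["ytc_UgwQ2uJP8t3mS_qiNTl4AaABAg"]["emotion"])  # approval
```

Indexing by `id` makes it straightforward to join a coded row back to its source comment, as the "Coding Result" table above does for one comment.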