Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Commenting before watching, because this is an evergreen relevant comment about AI advice like this: In engineering, we have a weird and counter-intuitive concept: "almost perfect" is worse than "pretty good". The reason for this is because if something is _almost perfect_, you can get complacent, assume it'll just always work, and get taken by surprise when it breaks, but if it's _pretty good_, then you _expect_ it to be wrong in some way, so you pay closer attention, you double check it, and you generally confirm that it's not broken. AI advice is in the "almost perfect" category right now, and that makes it extremely dangerous, because that makes it so that people follow it _blindly_. Always remember: trust, but verify. Don't just assume the AI is correct. It's frequently not.
youtube AI Harm Incident 2025-11-24T23:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx46HsdO5vB3f3on0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwfH-pFbFfS4mB2aDh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjngIgVcdaWcn8-aJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwORH6fT1daDN0207V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxz6f_Kiag-g-7EInp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwooL8oW3IFRvo7QXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZe4AzSx1e5hOwZK94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyKpQ0-yopz0ZFUWqR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVOVcdoXAtM06Ro3x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwZb5tB5jvL0bdi0YB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"}
]
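To inspect one comment's coding out of a batched response like the one above, the raw JSON array can be parsed and indexed by comment id. This is a minimal sketch, assuming the model output is a valid JSON array with exactly the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the abbreviated two-element payload below reuses two entries from the response above for illustration.

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings,
# using the same field names as the full response shown above.
raw_response = '''[
  {"id": "ytc_Ugxz6f_Kiag-g-7EInp4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx46HsdO5vB3f3on0h4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]'''

# Index the codings by comment id so a single comment can be looked up.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugxz6f_Kiag-g-7EInp4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → none approval
```

Indexing by id rather than list position matters here because the model is not guaranteed to return codings in the same order the comments were submitted.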