Raw LLM Responses

Inspect the exact model output behind the coding of any comment.

Comment
I think the big problem is that these companies refuse to admit these machines are infallible. the fact it's a computer that is marketed as having all the answers is part of why it's an AI problem, you need to be really confident if you're going to use that kind of marketing shit like this won't happen
YouTube · AI Harm Incident · 2026-01-16T05:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxBqO0R8QuL8Ml_x5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzPs7737AsV0Bg0B054AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzcSdIJsa9cLxjBXeF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxovrNLRtUhcrXY1tR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxqtUHXr7K2MssRzUx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzeb3D813Za7H3OdRV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgywaBVI4QJ6KmncnSl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQAotZBA51NnHwqft4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugz1k7G6D3GCWVRzXNh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwQF739Oht0zaarH454AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
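A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the responses visible in this view, so the actual codebook may define additional categories, and the function name is an illustration, not part of any real pipeline.

```python
import json

# Allowed values per dimension, inferred from the codings shown above
# (assumption: the full codebook may include more categories).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "indifference", "mixed", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if it is a dict with an "id" and every coded
    dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop malformed entries rather than fail the batch
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = (
    '[{"id":"ytc_example1","responsibility":"company",'
    '"reasoning":"deontological","policy":"liability","emotion":"outrage"},'
    '{"id":"ytc_example2","responsibility":"robot",'
    '"reasoning":"deontological","policy":"none","emotion":"outrage"}]'
)
kept = parse_coding_response(raw)
print(len(kept))  # the record with the unknown "robot" value is dropped
```

Filtering rather than raising on bad records reflects how a batch coder usually behaves: one malformed coding should not discard the other nine in the same response.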