Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
13:17 The issue with teaching AI taboos like this is that they were always observed to have unintended side effects, I think you stumbled across a few, like the denial that it ever did something like that, because as far as it knows at current time in its current state it could have never given that advice. I wouldn't even say that it was told to deny. That's just the side effect
youtube AI Harm Incident 2025-11-28T04:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwSnZQD-OsPoXlLwit4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzPKSFDMBCoSBhbvht4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugw8lNF-yoq9ipqGtqJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwQcsONsgJyHZjKdgV4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugxf-9hlzxc1ytVhGIV4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwrjA7-aOECSGjVFw94AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugzo8UQ_yOcx7w5aXT94AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxqT3optPIFNP6dXsJ4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugz2AsZN2ei7f-yeLGt4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyVZfwi4Aja4WmLYPF4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"}
]
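Since the raw response is a JSON array with one coding object per comment, looking up the coding for a single comment id reduces to parsing and scanning. Below is a minimal sketch; the helper name `coding_for` is hypothetical, and the abridged `raw` string reuses two entries from the response above.

```python
import json

# Abridged raw LLM response: a JSON array of per-comment coding objects
# (ids and dimension values copied from the example response above).
raw = """[
  {"id": "ytc_UgzPKSFDMBCoSBhbvht4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxqT3optPIFNP6dXsJ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]"""

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if it is absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_UgzPKSFDMBCoSBhbvht4AaABAg")
print(coding["responsibility"], coding["emotion"])  # distributed indifference
```

A scan like this also makes it easy to check that every comment id sent to the model came back exactly once before the codings are stored.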