Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In my estimation the reason people fall on the side of denialism is that they simply don't want to believe that the existential threat is real. This guy clearly hasn't taken the arguments seriously enough to have the "Oh sh*t" moment. Case in point @23:06. I can understand if the average person doesn't want to consider the real possibility that the world might end, but people writing AI policy? Inexcusable.
youtube 2025-11-21T12:5… ♥ 24
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwT7kNtEnbroo-TmBN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyxY8jTl7gVshqg3hl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwZtlYjAycs5EqT-l94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxDyyFdGwGKiYxLHth4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyLpLxuZqp_nffct6J4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx7TKkEn1s5CnUk4D94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwEi5TIp-1mMNTfe4l4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxsN7wp7gCzc5-vked4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwV5WUhEIWthwExu8B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzRnrgd51O3nE2NBGx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
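The per-comment coding shown above can be recovered from this batch response by parsing the JSON array and indexing on the comment `id`. A minimal sketch (the variable names and the single-record example are illustrative, not part of the tool's actual pipeline; the record shown is copied from the third entry of the response above):

```python
import json

# Raw batch response: a JSON array with one record per coded comment.
# Only one record is reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgwZtlYjAycs5EqT-l94AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"}
]'''

# Build an id -> record lookup so any comment's coding can be inspected.
records = {rec["id"]: rec for rec in json.loads(raw)}

coding = records["ytc_UgwZtlYjAycs5EqT-l94AaABAg"]
print(coding["emotion"])  # outrage
print(coding["policy"])   # none
```

Indexing by `id` rather than by position guards against the model returning records in a different order than the comments were submitted.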