Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans are dangerous, and we regulate them. We can really skip the ASI killing everyone part and just look at this from a power dynamic: How do we keep the superhero from doing supervillain things? And please stop using the "bad actor" stuff, because you're not talking about ASI (just the kind of tools we already have today). edit: Dean makes a valid (and I would say, very likely) point about big tech running to big gov to help them remain at the top of the tech tree when someone outside their influence builds the human-level AI first (probably by saying the economy will collapse if daddy gov doesn't steal the tech for them to "properly manage"). edit: ugh... instead of regulation no one knows how to draft, why not focus on the yadda yadda yadda part where something is created, and days later Earth is a ghost town. Nukes and gain of function research are at the top of the most likely to kill lots of humans list, so show me how software running on my computer is going to rocket past those risks.
youtube 2025-11-21T06:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwT7kNtEnbroo-TmBN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyxY8jTl7gVshqg3hl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwZtlYjAycs5EqT-l94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxDyyFdGwGKiYxLHth4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyLpLxuZqp_nffct6J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7TKkEn1s5CnUk4D94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwEi5TIp-1mMNTfe4l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxsN7wp7gCzc5-vked4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwV5WUhEIWthwExu8B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzRnrgd51O3nE2NBGx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
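The Coding Result table above is one row of this batch response, selected by comment id. A minimal sketch of that lookup, assuming the raw response parses as valid JSON (the dictionary-by-id indexing here is illustrative, not part of any documented pipeline; the payload is abbreviated to the single entry that matches the table):

```python
import json

# Abbreviated raw LLM response: only the entry matching the
# Coding Result table above (the real payload has ten objects).
raw = '''
[
  {"id": "ytc_Ugx7TKkEn1s5CnUk4D94AaABAg",
   "responsibility": "user",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "indifference"}
]
'''

# Index the batch by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Recover the coded dimensions for the comment shown above.
row = codes["ytc_Ugx7TKkEn1s5CnUk4D94AaABAg"]
print(row["policy"])   # -> liability
print(row["emotion"])  # -> indifference
```

With all ten objects loaded, the same lookup reproduces any comment's coded row from the raw model output.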