Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I could be wrong but this guy strikes me as a very smart idiot. Obviously he knows a lot more about AI than most people, especially me. Then he has just came to realization that AI is dangerous in the last few years. Has he never had any experience reading or watching science fiction? This isn’t exactly a new idea. Also, if he thinks Canadian banks are a super safe place for you to keep your money he doesn’t know much about the Canadian Liberal government… trust me I’m Canadian. He understands conformation bias and explains it well yet sounds like he has drank the Koolaid pitcher dry. 11:32 global government what could go wrong? 1:01:20 “We thought we were made in the image of God” so obviously he’s Atheist which helps explain how he sees humanity. He seems like the kind of guy who doesn’t understand the idea of “If it ain’t broke don’t fix it” until it’s too late. Yeah, you can help doctors see more people… but everyone is depressed, sick and dying more so now due partly from your actions. Sometimes I think the best thing that could happen to humanity is decrease our dependence on the internet and computers by 99%. It will never happen we’re in too deep now and most people wouldn’t want to anyway.
youtube AI Governance 2025-07-21T10:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzZmZVLzPKSHYdj_3B4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxsP8V7xEzbGI4oNAd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzrDd6MA9XD5JxfG6d4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyItGTyvstz8J-A74l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgytDp18mJgjbSCbg2V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxLOXl7cIrlRzX1Z614AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzRgDTReIsEpa5ujxt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx9omvue6fyzc_TMoZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "frustration"},
  {"id": "ytc_UgwE2i83MncddjfncSl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwTNefw0msQmBaJDz94AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
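Because the raw response is a JSON array of per-comment code objects, a small validator can catch malformed model output before it is written into the coding table. The sketch below is illustrative, not part of the tool: the allowed-value sets in SCHEMA are inferred only from the codes visible in this one response, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# export; the actual codebook may permit more categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological",
                  "contractualist", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none"},
    "emotion": {"outrage", "approval", "fear", "frustration",
                "resignation", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown ids or codes."""
    rows = json.loads(raw)
    for row in rows:
        # Comment ids in this export all carry the "ytc_" prefix.
        if not str(row.get("id", "")).startswith("ytc_"):
            raise ValueError(f"unexpected id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows
```

A matching row (e.g. the one displayed in the Coding Result above, responsibility=developer, reasoning=virtue, policy=liability, emotion=outrage) passes through unchanged; any row with a value outside the schema raises a ValueError instead of silently entering the table.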