Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you think that we're going to regulate AI, you're a fool. I don't know what else to say it. Think about a logically. For now. AI is not integrated into critical infrastructure, be it electrical, nuclear, military, medical. But at one point. We're going to have to integrate it. Why? Rogue Nations like Russia, North Korea, China. They are going to use AI to infiltrate, hack, the very infrastructure we rely on. Why is that important? Well we're going to need AI to defend against AI. If you have AI trying to hack into your infrastructure, you're going to need AI to repel that attack. You can say. But we'll have a global treaty of some kind. Again you're a fool if you think this is the case. Just like nuclear, all other advanced technology. You're going to have what they say in the public versus what they say behind the scenes. So get used to it folks. If you think it's okay that AI has access to, and the ability to control, your life is okay. Well then I guess you deserve what's coming
youtube AI Responsibility 2025-08-30T05:0…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyfN0Ed2ixcNrYGQ1d4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugzb-jUFkJtv6aGYhFB4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugy0CLUNXNYo1Vtpe6F4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxunGyKpIWwOG0kmSt4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugx-q7SCGKoTFzmSq3B4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugw4LhLfwhvXiguj0_14AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzHK4L4L0XmJpA13IB4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyJiGNaihSJSgYLLtN4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxKc6Sfo9mUzZi5Rjh4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugx8f7M9UMBNDEnsICx4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
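A minimal sketch of how the per-comment codes above could be recovered from the raw model output: the response is a JSON array of records keyed by comment `id`, and the entry for `ytc_Ugzb-jUFkJtv6aGYhFB4AaABAg` matches the Coding Result table shown above. The function name `code_for_comment` is illustrative, not part of the pipeline, and the raw string here is truncated to that one entry.

```python
import json

# Raw LLM response: a JSON array of per-comment codes. Truncated here
# to the single entry that corresponds to the Coding Result table above.
raw_response = '''
[
  {"id": "ytc_Ugzb-jUFkJtv6aGYhFB4AaABAg",
   "responsibility": "government",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]
'''

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Parse the model output and return the code record for one comment id."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

codes = code_for_comment(raw_response, "ytc_Ugzb-jUFkJtv6aGYhFB4AaABAg")
print(codes["responsibility"], codes["policy"])  # government regulate
```

In practice the parse can fail if the model wraps the array in prose or emits malformed JSON, so a production version would want a `try/except json.JSONDecodeError` around the `json.loads` call.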