Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the key difference is that AI can now credibly imitate speech and cognition. Think of language as the source code of human thought. What you think and what you feel is shaped by the media you consume, the conversations you participate in, the things you learn, and the thoughts you think. While computers have always been better than us at tasks like calculation, they now have an increasingly effective way to influence our thinking, to edit our source code. Imagine an AI that is as much smarter than us as we are to a dog. Imagine all of the strategies and techniques we use to control the behavior and thinking of our dogs, strategies that a dog could never conceive of. A dog has no ability to understand or consciously resist Pavlovian training. It doesn't understand what makes a shock collar work, or why it gets sleepy after it eats the peanut butter with the little white pill in it. A dog doesn't consciously understand why it does what we command, or if it does, its understanding is very basic: "I want my owner's approval. I want my owner to reward me. I don't want my owner to punish me." We cannot conceive of how an intelligence that much smarter than us would be able to influence us. And in the grand scheme of things, we aren't even that much smarter than dogs. Imagine if the AI were as much smarter than us as we are to a mouse? To an ant? The larger that intelligence gap is, the less our control strategies would matter. So if we had an air-gapped AI, where all commands needed to be executed by a person... what would prevent the AI from using that intelligence to train and manipulate us just like we do our dogs? Hopefully, AI superintelligence isn't possible. Hopefully, there are physical or technical limitations that prevent the kind of exponential growth that researchers are worried about. But if not, then we're in a very precarious moment in human history.
youtube AI Governance 2025-10-16T05:2… ♥ 3
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgyOjzmTIoLZXJQ8_614AaABAg.AOLq_Q-3U1jAOMOxQavCEm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyOjzmTIoLZXJQ8_614AaABAg.AOLq_Q-3U1jAOMZQfG2nIu","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxHMLXRr5iWJK8TPC14AaABAg.AOKcxEEl8x_AOL8M15bXkM","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOMoJcYULyg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOO6NOe7PSi","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOOyD59Zh8i","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgwnFgN90LTWyWMpQ1x4AaABAg.AOJlHYQMm7EAOLpvvvwYb0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugx0eO84iCVdGa-cKip4AaABAg.AOJau-ynNbIAOKRp7QKg86","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx0eO84iCVdGa-cKip4AaABAg.AOJau-ynNbIAOR8w2Bjz6b","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugzt0860SlM1kjlnlQV4AaABAg.AOJZLkvQuUxAOLpkVsaMkt","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
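A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the pipeline's actual code; the `CODEBOOK` values are an assumption inferred only from the labels visible in this response (the real codebook may contain more categories), and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the response above.
# ASSUMPTION: the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "developer", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "industry_self", "liability", "regulate"},
    "emotion": {"indifference", "mixed", "resignation", "approval", "outrage", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset appear to carry a "ytr_" prefix.
        if not rec.get("id", "").startswith("ytr_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical one-record example in the same shape as the response above.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # indifference
```

A record with an unknown label (say, `"emotion": "joy"`) would raise `ValueError` instead of silently entering the dataset, which is the main point of validating against a closed codebook.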