Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If the wealthy simply adapt and make products only for the wealthy, there isn't always a reason to keep the poor around as entertainment. That might work in video games, but it doesn't translate to other industries like agriculture and food. It's simply more efficient to let poor people starve to death.

When we look at solutions like UBI, we investigate the behaviour of people receiving money for doing nothing in great depth. But we don't really do the same for the owners of all the robots, who receive billions of dollars for also doing nothing. In these sci-fi AI autonomous economies, nobody actually does any work; that's the whole point of automation. We might never fully achieve such an economy, but it's not a pipe dream that we may come close.

So the big question is: if almost no work needs to be done by humans, why do we let the decision of who is allowed to live and who must die fall to whoever owns a bunch of robots and computers? You want me to believe that I should starve to death because there isn't a piece of paper that says I am the owner of some robots? That's a moral catastrophe, and one I don't think most humans will accept. If that's what the system demands, it seems likely that the outcome will be some kind of social revolution: the might of the masses against the wealthy few. It's starting to sound like a familiar narrative, almost like a predictable pattern, a flaw etched into the foundations of our economic system.

As Charlie Munger famously said: show me the incentive, and I will show you the outcome. The incentive of capitalism is to maximize shareholder value, and the end game of that incentive is that everything belongs to the shareholder. Nothing is left for you. Ironically, not unlike the precautionary writings of Asimov, you need to be careful about the rules, instructions, and incentives you provide to a system: not just an AI system, but an economic system too.
Like the horror stories of an AI over-optimizing for human happiness by eliminating all humans, capitalism can over-optimize for shareholder value by removing everyone but the shareholder from the equation. In reality, it seems, AI isn't the one that will kill us. We will kill ourselves; AI will just be the instrument.
YouTube · AI Harm Incident · 2024-07-28T21:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          liability
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx_ve3VZvPR4LEB4gh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz0YPCu-_-siDm-b-B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzuNswLpT5ZLpRYLK14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugz-z6rZwBaWIlrDglV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-lcQnkQ_SkjcLyCh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugyhwvb-ZeCeB2XshWF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgydTn14A1Xfgq7xUlZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyFQnnkpFrg7LEDN6J4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMEE-ubW7lBFA2V0h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-008YyjU-jRUBU4B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
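The raw response is a JSON array with one object per comment: an `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response can be parsed and the coding for one comment looked up by id (the snippet uses only the third entry from the response above; variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment codings. Only one entry
# is reproduced here; the full response contains ten such objects.
raw = '''[
  {"id": "ytc_UgzuNswLpT5ZLpRYLK14AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "resignation"}
]'''

codings = json.loads(raw)

# Index the codings by comment id so any coded comment can be inspected.
by_id = {c["id"]: c for c in codings}

coding = by_id["ytc_UgzuNswLpT5ZLpRYLK14AaABAg"]
print(coding["policy"])   # prints "liability"
print(coding["emotion"])  # prints "resignation"
```

The values retrieved this way match the Coding Result table for this comment, which is the point of inspecting the exact model output.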