Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My fear about the "apocalyptic" consequences of AI isn't about super-intelligence, but rather about a sort of confederacy of statistical decision-making dunces. As we integrate (are forced to integrate) these sorts of amazing generators into our infrastructure, we erode human decision making, wisdom, and responsibility. And while they are pretty great at searching, summarizing, and generating, they are pretty bad at decision making ("pretty bad" doesn't even cover it; they are incapable of doing anything that we would consider decision making. That isn't what the technology does.)

So now we have a scenario where an underpaid employee in whatever branch of the bureaucracy offloads work to a machine that is incapable of understanding or caring about anything, and then implicitly trusts whatever statistically generated mess it spits out. We already see this happening. Policy built on AI slop.

Like... the movie WarGames wasn't about a super-intelligent AI from the future that decided humans needed to be put under its thumb or whatever. It was about a robot that thought it was playing a game and, because of a coding error, was accidentally playing it with real nukes. And it doesn't have to be nukes... it can be some esoteric water policy in local government. Or diagnostic decisions that a stressed-out doctor makes. I don't fear The Matrix. I fear WarGames.
Source: YouTube · Video: AI Moral Status · Posted: 2025-10-30T19:3… · ♥ 80
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugz7To3N3bTqWHRXAWd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugzg3My9h6MiHmdkDD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzS6P_qp6JJzzMBB394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzLgdhp4_xZ5n82po54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgxMJlOHwQNVVDW5kz14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwMu7jkPZ781oZvapV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_Ugxo6c3EvZkZGen8eaN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz0MG1VkiFCZxQxg794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx3nSuDFDjpcBaDBdF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzUrlFSrmKEOxF9n-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"} ]