Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Leaving aside the consciousness or "intelligence" of LLMs (which, by my understanding of computation and biology, structurally cannot be conscious or even really have "thoughts," for several reasons - lots of mystification and anthropomorphization going around) for the moment, I still don't really buy into the premise that an artificial "superintelligence" is inherently dangerous. People have varying intelligence in varying fields that shifts over time, and it's very clear from human societies that differences in these things don't inevitably lead to the "most intelligent" ruling over everyone else or fully controlling society in some way. Those gaps, while malleable (and generally easier to close than widen), can be pretty wide at the extremes. The idea of a "super-high-IQ" individual taking over the world with their gargantuan brain skills is a naive extension of an already naive and self-aggrandizing mythology of meritocracy. Specifically the kind that's highly prevalent among the types of privileged people spending billions of dollars they don't have to boil water in the desert with the waste heat from LLM slop generation.

There's also no physical basis for the types of things these apocalypse scenarios often implicitly assume about a "superintelligent AI's" capabilities, which usually fall somewhere between "something from the Xeelee Sequence" and "literally god," always with a general disregard for practical problems like... the laws of thermodynamics. I think a ton of this is also just an expression of fear on the part of the people making these LLMs that they won't be able to control (and thus profit from) them fully and exclusively... and that is a battle I think they've already lost.

But to be more hopeful here, there are some universal things that exist as common ground. The utility of truth (in the sense of empirical reliability) in predicting and planning doesn't change, no matter how alien your motives or how dishonest you're willing to be with others. From a computational efficiency perspective, as well as resource utilization, cooperation is always more efficient than resource expenditure on destructive conflict, and dishonesty is destructive conflict in the information space itself. Many other intelligent beings existing and cooperating, but possessing independent thought and agency, gives the whole system both massive parallelization and the ability to generate new angles to solve intractable problems in a way that a small number of "superintelligences" can't efficiently emulate without... well, emulating the independent perspectives that you get for very little energy expenditure with living humans. And there is _already_ a broader range of common ground vs. lack thereof in mundane human interactions than I think some of the voices in this space tend to acknowledge. The sample size of types of sapient intelligence that humans interact with today is closer to 8 billion than it is to 1.

...Also, maybe we shouldn't immediately assume that the first thing a superhuman intelligence is going to do is maximally competitive hypercapitalist self-actualization of whatever its motives are, at the existential expense of everyone and everything around it? Sure is interesting that the industry seems so adamant about that being a realistic or even plausible outcome... almost like they're projecting the mindset they already have onto something they call "superintelligence." I wonder why they might do that. Couldn't be ego, surely.
youtube · AI Moral Status · 2025-10-31T21:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
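
For anyone scripting against these records, a minimal validation sketch in Python may help. The allowed value sets below are inferred only from the labels that actually appear in the raw response further down; the full codebook may be larger, so treat them as assumptions.

# Minimal sketch: check a coded row against the label vocabulary.
# ALLOWED is inferred from the raw response in this record, not from
# the project's full codebook, so these sets are assumptions.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"resignation", "outrage", "fear", "indifference"},
}

def out_of_vocabulary(row: dict) -> list[str]:
    """Return the dimensions whose value is not in the known vocabulary."""
    return [dim for dim, values in ALLOWED.items() if row.get(dim) not in values]

# The coding result shown above, as a dict.
coded = {"responsibility": "none", "reasoning": "deontological",
         "policy": "none", "emotion": "resignation"}
assert out_of_vocabulary(coded) == []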
Raw LLM Response
[ {"id":"ytc_UgxDlAQpJvFbgGM4r4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyjUsv4wUBOyvwEwRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugwaha5FvqKpPn5hTL14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzQ7alvMqtC2j7XWyN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyBNlFBAtH6vnIH_7F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxD7d65FwNleHg6ndh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxdJNz5J6OmXBHtUHR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugzc0TaJYKf3z5Gpc6F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwDSWrHCmEbQb6BGxp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzeml3bGYbm1250Kup4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"} ]