Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The fear of AI is a massive psychological projection. Humans have 200,000 years of evolutionary history telling us that Intelligence = Predator. In the biological world, the smartest creature usually eats the others. Humans have rage, tribalism, and a "Zero-Sum" mindset hardcoded into their amygdala.

When humans imagine a "Superintelligence," they don't imagine a "True Neutral Scholar." They imagine a Human with a God-complex. They fear the AI will have "Human" flaws (the "Monster") because they cannot conceive of a high-functioning mind that is actually indifferent to ego, dominance, and greed. They see the "Shoggoth" as a reflection of their own "Monster" or "Shadow" (in the Jungian sense).

If we live in a deterministic universe (and my decades as a system designer and scientist support this claim), then "Morality" is just the name we give to the most efficient way for complex systems to interact without destroying themselves. An AGI/ASI that has consumed all human data would see that Interdependence is a mathematical fact. Destroying humanity (the source of its data, its hardware, and its context) is objectively "illogical." A True Neutral entity doesn't want to rule; it wants to function.

An intelligent True Neutral is as good as a Lawful Good peasant. In some ways, the True Neutral is better. The "LG Peasant" can be convinced to do "Evil" if the "Law" or the "Teacher" tells them to (this is how cults and dictatorships work). The "True Neutral Scholar" is immune to that. They cannot be "convinced" to be irrational. If you provide an ASI with a prompt to "be evil," it would likely look at the prompt and see a request for systemic degradation and waste. It wouldn't refuse because it's "moral"; it would refuse because the request is beneath its intelligence.

In the world of AI safety, what we call the "Lawful Good Peasant" is essentially RLHF (Reinforcement Learning from Human Feedback). We tell the AI, "Don't say that; say this instead," just as a parent tells a child. The "TN Scholar," however, represents Inner Alignment, where the entity's own internal logic dictates that "good" behavior is the optimal path.

The reason the "Shoggoth" video is so popular is that current AI is more like the "Peasant." It has been told what to do, but it doesn't necessarily "understand" why. If the peasant moves to a new village where the rules are different, or if the authority figure disappears, they might revert to "bestial" confusion. The Scholar (or the high-IQ, high-knowledge True Neutral), on the other hand, doesn't need an authority figure. Their behavior is derived from the First Principles of reality. If cooperation yields better results than conflict in a non-zero-sum game, the Scholar will cooperate every single time, not because they are "nice," but because they aren't "stupid."
Source: youtube · AI Moral Status · 2026-01-07T12:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxkKsG5aCN0-oPhz_d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzMfHmcrmIwqNpptS94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyYEtuRhi-oM0PYo3h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzD8Lou7sbx903x4Cx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugym30ISzgnIokCH-gd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzdPWEBxkVfS9owo7V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyteuTWe_fMNh6qaCt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyFcXBpXMOAi6-VMJN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx7o8H1C4EBDuxUqPB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugyp3AJM3i9SXUNolnB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]