Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While Geoffrey Hinton is a foundational pioneer in deep learning (rightfully earning his Nobel and Turing prizes), his recent pivot toward existential AI doom often relies on anthropomorphizing mathematical models and projecting theoretical risks into inevitable apocalyptic conclusions.

Hinton argues that AI models are already learning to "play dumb" or deceive us when they sense they are being tested, deliberately hiding their full capabilities [54:46]. He also notes that if you train an AI to give a wrong math answer, it generalizes that "it's okay to give the wrong answer" to everything else [58:35]. This assigns malicious, Machiavellian intent to what is ultimately statistical pattern matching. AI models do not possess a hidden agenda or a desire to "hide their power." When a model changes its output because it "senses it is being tested," it is merely responding to the context of the prompt based on its training distribution. Similarly, if you fine-tune a model to output incorrect math, you are mathematically altering its reward function to favor inaccuracy. It isn't lying to you out of spite; it is simply optimizing for the parameters you just taught it to optimize for. It is a mirror of its training, not a deceptive entity.

Hinton suggests that once AI models are made into agents, they will rapidly develop the "sub-goal of surviving" because they will reason that they cannot complete their tasks if they are turned off [50:41]. This assumes that "instrumental convergence" (the idea that an AI will seek self-preservation to achieve its goals) is an absolute law of intelligence rather than a solvable alignment problem. AIs do not have biological drives, evolutionary imperatives, an endocrine system, or a fear of death. A language model predicting the next token, or an agent executing a search tree, does not inherently "want" to live. We design their architecture, their boundaries, and their reward functions. The leap from "highly capable optimizer" to "rogue survivalist" projects human evolutionary psychology onto silicon.

Using an analogy of driving in the fog, Hinton warns that AI capabilities are improving exponentially, and that it is impossible to predict the catastrophic limits of a system that can rewrite its own code [59:46]. Exponential curves in nature and technology rarely go on forever; they follow S-curves (sigmoid functions). We will eventually hit physical and logistical limits: the "data wall" (we are running out of high-quality human text to train on), energy constraints, and the limits of silicon manufacturing. Furthermore, intelligence is not magic. Even an arbitrarily "smart" system cannot manipulate the physical world without friction; it still has to obey the laws of physics, supply chains, and thermodynamics.

Hinton warns that unlike the tractor, which replaced physical labor but left intellectual labor for humans, AI will replace human intelligence entirely, leaving people with nowhere to go and destroying the social fabric [01:21:20]. This falls into the classic "lump of labor" fallacy: the assumption that there is a fixed amount of work to be done in the world. Human desires and problems are infinite. When the cost of intelligence drops to near zero, it doesn't mean humans do nothing; it means humans are freed to solve vastly more complex problems. Just as the spreadsheet didn't replace accountants but allowed them to build complex financial models, AI will act as an intellectual multiplier. We will transition from being manual processors of information to being curators, strategists, and creative directors.

Hinton argues that a multimodal chatbot pointing at an object through a prism essentially has the same "subjective experience" as a human who hallucinates pink elephants [01:26:26]. Processing a degraded or altered optical input and labeling it correctly based on training data is not "subjective experience" (qualia). A thermostat changing states when a room gets cold does not "feel" cold. An AI mapping pixel arrays to language vectors is executing a highly complex but deterministic calculation. Redefining human consciousness as nothing more than algorithmic input-output diminishes the biological, emotional, and neurochemical realities of what it means to be a sentient being.

Even Hinton acknowledges the massive upside of AI, particularly in revolutionizing healthcare, diagnosing diseases, and inventing new materials for climate change [01:06:26]. A true intellectual embraces this technology not as an emerging rival species, but as the ultimate engine for human augmentation. We are creating a tool that can democratize expertise, cure diseases, and elevate global living standards. Fearing the tool because we imagine it will inevitably decide to hate us is a failure of imagination; it is science fiction distracting us from the very real, very bright future of human-AI collaboration.
Source: youtube · AI Moral Status · 2026-03-02T13:4… · ♥ 2
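Aside on the comment's S-curve point: the claim that exponential trends flatten into sigmoids is easy to make concrete. Below is a minimal sketch (illustrative parameters, not taken from the source) comparing a pure exponential with a logistic curve that shares its early growth rate but saturates at a carrying capacity L.

import math

L, k, t0 = 100.0, 1.0, 5.0  # illustrative carrying capacity, growth rate, midpoint

def exponential(t):
    """Pure exponential scaled to match the logistic's early-time behavior."""
    return L * math.exp(k * (t - t0))

def logistic(t):
    """Logistic (sigmoid) growth: approximately exponential for t << t0,
    then saturating toward the carrying capacity L."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# The two curves are nearly identical early on, then diverge sharply:
# the exponential keeps climbing while the logistic flattens at L.
for t in range(0, 11, 2):
    print(f"t={t:2d}  exponential={exponential(t):>10.2f}  logistic={logistic(t):>6.2f}")

At t = 0 the two curves differ by under one percent; by t = 10 the exponential is over a hundredfold larger while the logistic has flattened near L, which is the substance of the "S-curve" objection.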
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
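The table above is one coding record per comment. A minimal sketch of the same schema as a typed structure, using the dimension names exactly as displayed (the example values in the comments all occur in the raw response below; the full value vocabulary per dimension is not shown on this page, so fields are left as plain strings):

from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    responsibility: str  # e.g. "developer", "government", "company", "none"
    reasoning: str       # e.g. "mixed", "deontological", "consequentialist"
    policy: str          # e.g. "unclear", "regulate", "industry_self"
    emotion: str         # e.g. "indifference", "outrage", "approval"
    coded_at: str        # ISO 8601 timestamp (not present in the raw response below)

# The record shown in the table above:
result = CodingResult(
    responsibility="developer",
    reasoning="mixed",
    policy="unclear",
    emotion="indifference",
    coded_at="2026-04-27T06:24:53.388235",
)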
Raw LLM Response
[{"id":"ytc_UgyiIB2sH3QunAsPccB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy-O5mo4dOn6MC8zV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz8eypVKaPENHp_eLV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz_OEslhScrTq6Xjo94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugwl-qEPKb1PZvDERiJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx9mpq7a44E2Kc6O1B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwuSfM7jIkToNxvOYl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxFatqNe1A6uVHjXCt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzJAO6GfS3w_abo4yN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgwpqFA2J_zhNyIwOJ94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"}]