Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great interview, very scary, but I believe Professor Russell's focus on a centralized 'cabal' is missing the central driver of AGI deployment: Unfettered Capitalism. While the risk is real, the future won't be defined by 6 people hoarding the technology, but by selling it to every person and company that can afford it.

1. The Market Will Decenter the Machine (A Rejection of Monopoly)

I strongly disagree that a handful of people will control all AI or AGI robots. Companies are driven by profit, and AGI represents the biggest profit opportunity in history.

* The Elon Musk Model: We've seen figures like Elon Musk state they want to produce billions of humanoid robots (Optimus) and sell them affordably (projected $20k-$30k). Musk has explicitly made it clear he wants to sell this technology as widely as possible, predicting it will eventually account for 80% of Tesla's value. You don't get a $10+ trillion valuation by keeping it locked up; you get it by achieving mass distribution.
* Widespread Use: This means AI will not be confined to secret labs. I see AGI being used in homes as personal assistance—as a maid for chores, a doctor for basic diagnostics, and even for child care. Hospitals will own thousands, the mega-electric company will buy 1,000, and my small electrical business will buy 20. The principle is the same as any other product: companies will get richer, and they will pay the makers of the bot.

2. The Real Catastrophe: Mass Economic Collapse

The issue isn't that nothing changes; it's that the economic divide will widen exponentially, creating a humanitarian crisis far faster than any Skynet scenario.

* The Labor Apocalypse: The people who rely on traditional employment for their income will be devastated. AGI/Robots will be cheaper, faster, and work 24/7, making human labor redundant across the board. The working class will be "fucked."
* The Idea-Capital Class Wins: Individuals with "gen brains"—entrepreneurs, innovators, and creators—will be fine. They will simply use the automated labor force provided by the bots to accomplish their goals and ideas at an unprecedented scale.
* The Failed Subsidy Trap: The state will be forced to implement universal income or massive subsidies for the displaced, turning the US (and other nations) into a society where the poor get poorer, and the state's main function is distributing scarcity. We saw how systems like the Cold War-era USSR failed; this model destroys wealth and purpose.

3. A Call for Global Guardrails (The Chernobyl Principle)

I share Russell’s concern about the AGI surpassing us (the "gorilla problem"), but I have faith in humanity's ability to act when the alarm is sounded loud enough.

* We can and should be able to implement limiting strategies or "switches." If a crisis like the Chernobyl nuclear disaster can trigger global regulatory reform and a unified response, the existential risk of AGI can too.
* The key is to align its objectives and ensure the machine knows humans are the main priority/key—not just for our survival, but for its core function. We must band together now to implement that strategy.

The threat isn't a digital dictator, but a market-driven, decentralized economic collapse that precedes the loss of control.
youtube AI Governance 2025-12-05T23:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
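A coding result like the one above can be sanity-checked against the label vocabulary that actually appears in this dump. A minimal sketch; the allowed-value sets below are inferred from the raw LLM responses shown on this page and may be incomplete relative to the real codebook:

```python
# Allowed labels per dimension, inferred from the raw responses in this dump
# (assumption: the full codebook may define additional values).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate(coding: dict) -> list:
    """Return (dimension, value) pairs whose value is not a known label."""
    return [(dim, val) for dim, val in coding.items()
            if dim in ALLOWED and val not in ALLOWED[dim]]

# The coding result shown above:
result = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(result))  # an empty list means every value is a known label
```

This catches the common failure mode where the model invents an off-vocabulary label that would silently corrupt downstream counts.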
Raw LLM Response
[{"id":"ytc_Ugw2gwt2wtIDpWAdYmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzlNicjP9oLwrQcufV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwgokjebpqTJ3LbgZB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyi4u9DZVz67FKtxM54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxlQh0gxU9UQWqGFGR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxMhR0PaTWCXwUmJUN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxL8c_NHmXKZnDfRSp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxzIsjX4J3eI1BtgJh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugz1XAfFy6YgrjXknOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxofVkH1qe_IlAqEiN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]