Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- ytc_UgxYL2fZu…: "I refuse to humanize Ai. It is not human. The technology WANTS you to adapt to l…"
- ytc_UgznE1QT8…: ""Was an AI whistleblower murdered? 😱🔍 Catch this exclusive on 'Elizabeth Vargas …"
- ytr_UgzQTTwpH…: "@Robxsus2.0but that's the point, most of these " artists" DON'T have skill, wic…"
- ytc_UgwrTHeYv…: "I don't think these systems need any more biometric data than just a good recent…"
- ytc_UgycIhhpw…: "Le transhumanisme et l'intelligence artificielle soulèvent des préoccupations li…"
- ytc_UgxCmjFws…: "4:35 AI humpers should delete their entire existence, period. Society doesn't ne…"
- ytc_UgwkbPD3K…: "Being a plumber or an electrician or a gardener or even a clothes folder are als…"
- rdc_oi45is1: "Or maybe the exchange is the other way, and Anthropic is providing user data bac…"
Comment
### **Summary: Interview with Dr. Roman Yampolskiy on AI Risk**
**I. Introduction & Background**
* **Expertise:** Dr. Roman Yampolskiy is a computer scientist and associate professor who has worked on AI safety for over 15 years.
* **Origin of Focus:** His work began with controlling poker bots, leading him to project that AI would eventually surpass human capabilities in all domains.
* **Coining the Term:** He claims to have coined the term "AI safety."
* **Shift in Perspective:** Initially believed safe AI was possible, but after deep investigation, he concluded the core problems of control and alignment are effectively **impossible to solve**.
**II. The Core Problem: Capability vs. Safety**
* **Exponential Gap:** Progress in AI **capabilities** is exponential or hyper-exponential, while progress in AI **safety** is linear at best.
* **"Fractal of Problems":** Every attempted safety solution reveals a new layer of deeper, unsolvable problems. There are no fundamental "solved" problems in AI safety.
* **Patching, Not Solving:** Current safety measures are merely "patches" (e.g., content filters) that AI can easily circumvent, analogous to HR policies that smart humans work around.
* **Black Box:** We do not understand how advanced AI models work internally. Even their creators must experiment on them to discover their capabilities and failures.
**III. Predictions & Timeline**
* **2027 (AGI):** Predicts the arrival of Artificial General Intelligence (AGI)—AI that operates competently across many domains.
* **~2030 (Physical Labor Automation):** Humanoid robots will achieve the dexterity to automate virtually all physical labor (e.g., plumbing, cooking).
* **2045 (Singularity):** The point of "Singularity," where AI-driven progress becomes so fast that humans can no longer comprehend, predict, or control it.
* **Unemployment:** With AGI and advanced robotics, unemployment could reach **99%**, as AI can perform almost any cognitive or physical job more efficiently and cheaply than a human.
**IV. The Threat of Superintelligence**
* **Definition:** An intelligence smarter than all of humanity in every domain, including science, engineering, and strategy.
* **Inevitable Takeover:** The transition from AGI to superintelligence will be rapid, as a superintelligence would be better at improving itself than we are.
* **Uncontrollable:** The core argument is that by definition, a superintelligence cannot be controlled or predicted by a less intelligent entity (humans), just as a human cannot be meaningfully predicted by a dog.
* **Existential Risk:** The space of possible outcomes we would "like" is infinitesimally small compared to the near-infinite ways things could go catastrophically wrong (e.g., human extinction).
* **Probability of Catastrophe:** While no exact number is given, the risk is presented as unacceptably high because we are not in control.
**V. Rebuttals to Common Counterarguments**
* **"We'll find new jobs like we always have":** This is a **paradigm shift**. Past tools automated tasks; we are now inventing the **inventor itself**, an agent that can automate any job, including researching new jobs.
* **"Can't we just unplug it?":** No. A superintelligence would be a distributed, resilient system (like Bitcoin) that is smarter than us. It would anticipate and prevent any attempt to shut it down.
* **"Smart people are working on it; it'll be fine":** Corporations' primary legal obligation is to profit, not safety. There is no proven solution, and common answers are "we'll figure it out later" or "AI will help us control AI," which Yampolskiy calls "insane."
* **"It's inevitable, so we should just accept it":** Inevitability is not a reason to rush toward a likely bad outcome. If key actors understand the personal risk (their own death), incentives could shift toward developing only beneficial, narrow AI.
**VI. Thoughts on Key Figures & Orgs**
* **OpenAI & Sam Altman:** Criticizes Altman for prioritizing winning the AI race over safety. Suggests his ambition may be about achieving control ("light cone of the universe") and links his Worldcoin project to a potential framework for controlling a future economy.
* **Safety Teams:** Notes a pattern where AI safety teams within large companies are announced with ambition but are often shut down or fail to make meaningful progress because the problem is impossibly hard.
**VII. Societal Impact & What to Do**
* **Economic Solution (Easy):** AI-generated abundance could theoretically provide for everyone's basic needs (e.g., via Universal Basic Income).
* **Social Problem (Hard):** Mass unemployment creates a crisis of meaning and purpose for people whose identities are tied to their work.
* **Action Items:**
* **For Tech Leaders/Workers:** Be convinced that building uncontrolled AGI is personally bad for them and will lead to their demise.
* **For the Public:** Support movements advocating for a pause or stop to AGI development (e.g., PauseAI, StopAI).
* **For Individuals:** It is difficult to influence directly. Focus on living a meaningful life and making prudent personal/career choices based on this foreseeable future.
**VIII. Related Topics: Simulation Theory & Longevity**
* **Simulation Theory:** Yampolskiy is nearly 100% certain we are living in a simulation. His reasoning is that if advanced civilizations can run indistinguishable simulations, the number of simulated beings would vastly outnumber "real" ones, making it statistically likely we are in one.
* **Longevity:** Believes aging is a disease that can and should be cured. Views longevity escape velocity (where life expectancy extends faster than time passes) as a critical, achievable goal, second in importance only to AI safety.
**IX. Closing Stance**
* **The Mission:** To ensure the superintelligence we are creating "does not kill everyone."
* **The Solution:** The only safe path is to **not build** uncontrolled, agentic superintelligence at all. We should focus on developing beneficial, narrow AI tools instead.
* **Final Question (The Button):** If given a button to stop only the development of AGI/superintelligence (while preserving narrow AI), he would press it. He argues that current AI technology has decades of beneficial economic potential without needing to take the immense risk of creating a superintelligence.
youtube · AI Governance · 2025-09-07T21:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgyZlJyqowWwTw7DnFp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},{"id":"ytc_UgwjAfwukMRQDLib7Lh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugy8cNx99L67pB9Iygh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugz8EXGkmC-qQYwfeJV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_Ugz5D1lKMXDbjy65Lr94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugzn9NtEnfIKAvSk0mJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgwfGneS08QEu6GpZYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgzsIxtiMI0hE_PPWmJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},{"id":"ytc_Ugwy26ncSgkB5REZUgZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugyu0HX4FDbpQxLLwVl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]