Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzXAmPV6… · "Idk why Tesla-stans try to argue with me that radar equipment designed for sensi…"
- ytc_UgwuuqGy_… · "I remember reading a story about a scientist who was teaching an African Grey pa…"
- ytc_UgzbeWzSF… · "As an aspiring artist and writer with severe anxiety, LavenderTowne has become p…"
- ytc_UgyIr6qzE… · "It's crazy how much the YouTube algorithm seems to work against promoting your c…"
- ytc_Ugw1sP5oW… · "one of my friends made an arg and they used ai to make the art (it was very intr…"
- ytc_Ugw5meOTT… · "I hate your analogy. I hate AI, but i do find machine assisted speed runs to be …"
- ytc_Ugx8N2Ckl… · "If we make the ai robots barely smarter than us, and model them after our behavi…"
- ytc_UgxRhZ0xm… · "I did not come to AI images out of curiosity. I came to it out of rejection. Rej…"
Comment
Dr. Roman Yampolskiy AI Safety Interview Analysis by Bob Roman
September 4, 2025
INTRODUCTION
Dr. Roman Yampolskiy, Professor of Computer Science at the University of Louisville and the researcher who coined the term "AI safety," presented a stark warning about artificial intelligence during his interview with Steven Bartlett. With over 15 years of research in AI control and more than 100 publications, Yampolskiy assigns a 99.9% probability that superintelligent AI will destroy human civilization. His analysis covers critical timelines, employment disruption, technical safety challenges, and industry practices that he believes are gambling with humanity's survival.
THREE MAJOR TAKEAWAYS
1. UNPRECEDENTED TIMELINE OF AI DOMINANCE
Yampolskiy predicts Artificial General Intelligence (AGI) will arrive by 2027, with humanoid robots capable of all physical labor by 2030, leading to a technological singularity by 2045. Unlike previous technological revolutions that automated specific tasks, AI represents the automation of intelligence itself: agents that can perform any cognitive or physical job. This will result in 99% unemployment rather than gradual job displacement, because AI systems will be capable of doing everything humans can do, only better, faster, and cheaper. The economic abundance created by free AI labor may solve material needs, but it will create an existential crisis of human purpose and meaning.
2. TECHNICAL IMPOSSIBILITY OF AI CONTROL
The fundamental problem is that controlling superintelligent AI indefinitely is mathematically impossible, comparable to building a perpetual motion machine. Current AI systems are already unsafe: every large language model has been successfully jailbroken and made to perform unintended actions. As AI capabilities advance exponentially while safety measures progress linearly, the control gap widens dangerously. Modern AI systems are black boxes that even their creators don't fully understand, requiring experiments to discover their capabilities. Once superintelligence emerges, humans will have no second chance to implement safety measures, unlike in cybersecurity, where you can change passwords after a breach.
3. INDUSTRY LEADERSHIP PRIORITIZING SPEED OVER SAFETY
Dr. Yampolskiy strongly criticizes OpenAI and Sam Altman for violating established AI safety guidelines and gambling 8 billion lives on achieving superintelligence first. He notes that industry leaders privately acknowledge 20-30% extinction risks while publicly minimizing dangers. Companies create AI safety teams that are quickly disbanded when they inconvenience rapid development timelines. The competitive race between nations and corporations incentivizes cutting corners on safety research, with leaders more focused on world dominance and historical legacy than human survival.
THREE CALLS TO ACTION
1. DEMAND ACCOUNTABILITY FROM AI COMPANIES
Challenge industry leaders to publish peer-reviewed papers explaining exactly how they plan to control superintelligent systems before deployment. Stop accepting vague promises about solving safety later. Companies valued at billions claiming to work on safe superintelligence should be required to demonstrate their safety methods publicly. Support organizations like PauseAI and StopAI that are building momentum for democratic oversight of AI development. Contact your representatives to demand that AI development proceed only with explicit public consent and safety guarantees.
2. FOCUS INVESTMENT AND DEVELOPMENT ON NARROW AI
Redirect resources toward narrow AI systems designed for specific beneficial purposes like medical research, climate solutions, and scientific discovery, rather than general superintelligence. These targeted systems can provide enormous economic value and solve critical problems without the existential risks of autonomous agents. Support companies and research that prioritize human-controlled tools over independent AI agents. Recognize that 99% of current AI's economic potential remains undeployed, meaning we can achieve tremendous progress without rushing toward superintelligence.
3. EDUCATE YOURSELF AND OTHERS ON AI EXISTENTIAL RISKS
Read authoritative sources on AI safety, including Yampolskiy's books and research from recognized experts in the field. Understand the difference between narrow AI tools and autonomous AI agents, and why superintelligence represents humanity's final technological challenge. Share this knowledge with your network, especially those in positions of influence. Recognize that this issue affects every person on Earth, yet most remain unaware of the timeline and stakes involved. Time is critically short; these conversations need to happen now, while humans still have agency in the outcome.
Bob Roman, President and Founder, Acts4AI
Email: Bob@acts4ai.com
Website: www.acts4ai.com
LinkedIn: https://www.linkedin.com/in/romanbob
"I will give you the shovel to mine for the gold that can be found in A.I. safely, legally, morally, and in a God glorifying way!"
Source: youtube · Topic: AI Governance · Posted: 2025-09-04T09:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyqMtqDqQ8TTl8i1o94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwo1hXQg3-tIk1QQkB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzHRjc-Kn6cqQdMiNJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxRYwYUs6pfD-xC1i94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyrwcUyLljH17kVpdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjQTQ48zJa2ilQ9nt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgypulxmlncQPzblQOp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQXOHqRa1QHyuUwnp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzGuKkZPcJhEkh93Px4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1g7yObbs5nzRpHZ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
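The raw response is a JSON array with one record per comment in the batch, each record carrying the comment ID plus the four coded dimensions shown in the Coding Result table above. A minimal lookup sketch in Python, of the kind this page's ID lookup implies (the file name `raw_llm_response.json` and the helper `lookup_coding` are hypothetical illustrations, not the tool's actual code):

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment ID, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

# Hypothetical source of the raw JSON array shown above.
raw = open("raw_llm_response.json").read()
coding = lookup_coding(raw, "ytc_UgzHRjc-Kn6cqQdMiNJ4AaABAg")

if coding is None:
    print("comment not found in this batch")
else:
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {coding[dim]}")
```

For the all-unclear record in the array above, this prints "unclear" for all four dimensions, matching the Coding Result table.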