Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking a comment up by its ID or by browsing the random samples below.

Random samples:

- Can anyone explain why he thinks we’re close to AGI? LLMs are far away from that… (ytc_UgzyQrLss…)
- "Wym the robot drove the cybertruck before me..!" I can just hear it now 😂… (ytc_UgxGp0auv…)
- The rich & powerful will continue to abuse AI for power, greed & egocentric mean… (ytc_UgzmvbnkJ…)
- Every time I see AI art that's good I always either hold that comment for width … (ytc_Ugzt2i87o…)
- Sounds like a Theranos-eqsue was to gain the biometric data of millions, and the… (rdc_iref716)
- damn that first clip is so fucking stiff and lifeless. If it wasnt AI I still wo… (ytc_UgxAVov1X…)
- Oh! AI understands the rules, that they are true and can’t manipulate them by no… (ytc_Ugz9zf1-S…)
- Reddit (social media in general) is full of tech-bros and not the place where yo… (ytc_UgxTD9jyI…)
Core Messages on AI Safety
Safe Superintelligence is Impossible: Dr. Yampolskiy, who has been researching AI safety for over 15 years, has concluded that creating a controllable superintelligence is impossible. Progress in AI capabilities is exponential, while progress in safety is linear, causing the gap between them to widen continuously.
AI is a "Black Box": Even the developers do not fully understand how the models work internally. They have to run experiments on their own products to discover their capabilities.
"Pulling the Plug" Doesn't Work: The idea of simply shutting down a superintelligence is naive. Such systems would be distributed, would have created countless backups, and would be intelligent enough to anticipate and prevent such an attempt.
Specific Predictions
By 2027: We will achieve Artificial General Intelligence (AGI), making it possible to replace most humans in most jobs and potentially leading to 99% unemployment. Computer-based jobs will be automated first, followed shortly by physical labor, thanks to humanoid robots.
By 2030: Humanoid robots will be advanced enough to compete with humans in skilled trades, such as plumbing.
By 2045: The arrival of the "Singularity"—a point at which technological progress, driven by AI, becomes so rapid that it is uncontrollable and unpredictable for humans.
Dangers and Existential Risks
The Greatest Risk: Superintelligence is the single biggest existential risk to humanity. If its development goes wrong, the danger dwarfs every other problem, including climate change; if it goes right, superintelligence could solve those very problems.
Danger from Misuse: Even before a superintelligence is created, there is a risk that terrorists or doomsday cults could use advanced AI to, for example, engineer and release a new, highly contagious virus.
Critique of AI Leaders: He sharply criticizes figures like Sam Altman (CEO of OpenAI), accusing them of prioritizing the race for superintelligence, power, and wealth over safety, thereby gambling with the lives of 8 billion people.
Social and Philosophical Implications
Mass Unemployment and a Crisis of Meaning: The economic consequences of automation might be solved with a universal basic income. The deeper problem, he argues, is the loss of meaning and structure that many people derive from their work, which could trigger serious societal upheaval.
The Simulation Hypothesis: Dr. Yampolskiy is almost certain ("very close to certainty") that we are living in a simulation. He argues that once a civilization develops the technology to create realistic simulations, it will run billions of them. The statistical probability of being in the one "base reality" becomes vanishingly small.
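The arithmetic behind that claim can be made explicit; a minimal sketch, assuming for illustration that a civilization runs $N$ simulations indistinguishable from base reality and that an observer is equally likely to find themselves in any of them:

$$P(\text{base reality}) = \frac{1}{N+1}$$

With $N = 10^9$ simulations, the chance of being in base reality is about one in a billion.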
What Should Be Done
Don't Build General Superintelligence: Instead of pursuing an uncontrollable AGI, we should focus on developing useful, narrow AI that solves specific problems (e.g., curing diseases).
Convince the Developers: The only way to slow down development is to personally convince the developers and investors that they are on a "suicide mission" that will cost them their own lives as well.
youtube · AI Governance · 2025-09-07T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
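As a rough illustration, a coded record like the one above can be checked against the dimension values seen on this page; a minimal Python sketch, where the allowed code sets are inferred from the raw response below and are an assumption, not the pipeline's authoritative codebook:

```python
# Allowed values per dimension, inferred from the samples on this page;
# an assumption, not the pipeline's authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "government", "company", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "mixed", "outrage", "indifference", "fear",
                "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# Usage: the example record from the "Coding Result" table above.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(record))  # -> [] (all dimensions valid)
```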
Raw LLM Response
```json
[
{"id":"ytc_UgyL-6VnMX70cuDaC-h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyvE5vSyFLqWYvB_fp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxxQxSdDmKocsrby8R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwamQRXGewOu3ZKDE14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxCD29ZIb0G4rdYQ_Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwbDVhGvSpBJinUdkZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy7bAGu3nRvry43lzl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzbEMyb4ixhseY3x194AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz68y-_szKEugGS-dV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzNvo1dmXjHr73uyrh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
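For completeness, here is one way such a raw response could be parsed and indexed to support the by-ID lookup mentioned at the top of this page; a minimal Python sketch in which `index_by_id` and the embedded two-record sample are illustrative, assuming the response is always a JSON array like the one above:

```python
import json

# Two records copied from the raw response above, embedded so the sketch runs standalone.
RAW = """[
  {"id":"ytc_Ugy7bAGu3nRvry43lzl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzbEMyb4ixhseY3x194AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for direct lookup."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# Usage: look up the coding for one comment by its ID.
index = index_by_id(RAW)
coding = index["ytc_Ugy7bAGu3nRvry43lzl4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# -> developer regulate fear
```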