Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- An AI that operates on the fundamental principles of open source, sharing, or co… (ytc_Ugygfs5rf…)
- Industry elites are telling us to our faces that the creative human element & ar… (ytc_Ugyp9PUWE…)
- Fascinating, I think AI may be the ultimate protoscience. And the terrifying le… (ytc_UgyOi8Sl6…)
- @handgun559 Again I'm not even saying that. You are seriously strawmanning this.… (ytr_UgxLUDent…)
- Remember that AI is also big in the crypto-bro circles, the people who keep tryi… (ytc_UgylqAIfj…)
- “It’s not just so that Ai will spare you when the apocalypse comes.” 😂 This is … (ytc_UgykFVKUK…)
- calling yourself an AI artist for typing in a prompt is like calling yourself a … (ytc_UgxXfdphl…)
- No I think Claude and Grok are here too. They argue a lot but in the end of da… (rdc_oi2hnyh)
Comment
Key Predictions and Concerns
Job Automation and Unemployment: Dr. Yampolskiy predicts that the capability to replace most human jobs will arrive very quickly. He mentions that by 2027, with the rise of artificial general intelligence (AGI), human labor for most jobs will become obsolete [00:32]. He suggests that a world with 99% unemployment could become a reality, as AI and humanoid robots automate both cognitive and physical tasks [00:38].
The Inevitable Rise of Superintelligence: According to Dr. Yampolskiy, the development of superintelligence, a system smarter than all humans in all domains, is an inevitable outcome of the current AI race [00:45]. He argues that progress in AI capabilities is hyper-exponential, while progress in AI safety is only linear, creating a widening gap that makes it impossible to guarantee the safety or control of these systems [07:03].
Uncertainty of the Future: He explains that once a system becomes smarter than humans, it becomes impossible to predict its actions or what the world will look like, a concept he refers to as a "singularity" [19:04]. He believes this unpredictability is why it's not possible to retrain for a new job, as all jobs will eventually be automated [16:30].
AI Safety and Ethics: Dr. Yampolskiy, who coined the term "AI safety" 15 years ago, expresses concern that the people and companies leading the AI race are prioritizing winning over safety [05:22]. He claims that these companies have a legal obligation to make money for investors, not to do "no harm" [04:12]. He argues that the solution to AI safety is not patches or legal obligations, but a universal understanding of the dangers [07:28].
Simulation Hypothesis: Dr. Yampolskiy believes there is a high probability that we are living in a computer simulation. He justifies this belief by pointing out that once the technology to create indistinguishable simulations becomes affordable, it would be statistically more likely that we are in one of the many simulations being run, rather than in the one "base reality" [01:00:54]. He also connects this theory to religion, suggesting that many religions describe a superintelligent being that created the world, which is consistent with the idea of a simulator [01:01:00].
youtube · AI Governance · 2025-09-04T11:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxVWWvvoFbANOD4alZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSO2cUUCA7aHGPEu14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyKPcYRSJ9OWPuoBx94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzW5c3SWBkL7oJXub54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwbnvw71YEOKdrdtt94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZUUDvMQtiucHVtfF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6XT4wRdyfp30bcEd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaID_XxSCZ_V_eZ-B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwy2RufjF_0bTxLJCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugync8uO_yL5VqZyopR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
```
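The raw LLM response is a JSON array with one record per comment, each carrying the four coding dimensions from the table above (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a batch might be parsed and indexed for per-comment lookup (the `index_codes` helper and the shortened IDs are hypothetical, not part of the tool), one could write:

```python
import json

# Hypothetical batch response in the format shown above (IDs shortened
# for readability; real IDs are full YouTube comment IDs).
raw_response = """[
 {"id": "ytc_UgxVWW", "responsibility": "government",
  "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
 {"id": "ytc_UgyKPc", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# The four coding dimensions every record must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID,
    rejecting records that are missing any dimension."""
    coded = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id', '?')} missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

codes = index_codes(raw_response)
print(codes["ytc_UgyKPc"]["emotion"])  # fear
```

Indexing by ID is what makes the per-comment inspection view above possible: given a comment ID, the coded dimensions can be retrieved directly from the parsed batch.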