Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I honestly don't think that would be any worse than before, and if AI keeps gett…" (`ytr_Ugy7j4j3A…`)
- "I love Bernie but this is one take that's absolutely wrong. That's not how "AI" …" (`ytc_UgzX2GTI6…`)
- "Controlled misinformation. Most of you never suspect how many of these "new stor…" (`ytc_Ugxs7pMD4…`)
- "this is very true. i design apps and websites and before i used to rely on conte…" (`ytc_Ugw62O8rH…`)
- "Rogue AI? Doesn’t exist. What we have are bots tools that need infrastructure, p…" (`ytr_UgyOEX1rW…`)
- "AI should be a tool not a creator, use AI to come up with references if ur getti…" (`ytc_UgyDQSm00…`)
- "I’m curious, how many of the ads promoting the use of AI were actually DONE usin…" (`ytc_Ugyb8fDSr…`)
- "we can also have some positive things with respect to security from AI if we act…" (`ytc_UgwXUy_SK…`)
Comment
Dr. Roman Yampolskiy warns of imminent AI risks, predicting 99% unemployment by 2027 as automation advances faster than safety measures. He emphasizes the dangerous gap between rapidly increasing AI capabilities and lagging safety protocols, warning that superintelligent AI could lead to catastrophic outcomes without proper controls.
Key Points:
00:30 Yampolskiy highlights AI safety risks as capabilities outpace safety measures. Predicts AI will replace most human jobs by 2027, creating unprecedented unemployment. Lack of understanding on keeping AI systems safe and aligned with human values raises ethical concerns.
08:04 AI advancement toward AGI by 2027 threatens widespread job automation. While current AI excels in specific tasks, AGI could automate both cognitive and physical labor, potentially reaching 99% unemployment despite many believing their roles irreplaceable.
16:06 Automation requires societal adaptation beyond job retraining, as all jobs may eventually be automated. Need for new economic systems and governmental planning to address changing societal dynamics and provide meaning in post-work reality.
24:08 Technological singularity means AI will advance too rapidly for human comprehension. AI automating invention creates knowledge gaps and control concerns. Superintelligence could solve global issues but may become impossible to shut down once achieved.
32:11 Decreasing AI costs increase superintelligence risks, requiring regulation. Unlike nuclear weapons, AI's autonomy makes it more dangerous. Concerns about malicious use and unregulated development by powerful entities without proper ethics.
40:14 AI training uses large datasets to recognize patterns, revealing new capabilities over time. Process requires significant resources and years of development. Leadership decisions at AI companies regarding safety priorities raise concerns among former employees.
48:19 Regulating superintelligence faces challenges as current systems don't apply to AI. Skepticism about legislation effectiveness due to loopholes. Historical AI safety organization failures raise feasibility concerns. Public protests suggested as potential influence.
56:24 Discussion of simulation theory as VR and AI blur the lines of reality. Advanced technology creates immersive worlds that challenge our perception of existence. Statistical arguments suggest we are more likely living in a simulation than in the real world.
1:04:26 Human ego affects worldview and life meaning. Religious beliefs influence life perspectives and afterlife views. Questions about immortality's impact on reproduction and life choices. AI and genetics may extend human life.
1:12:28 Bitcoin's 21 million cap makes it truly scarce. Quantum computer threats require quantum-resistant cryptography. The simulation hypothesis suggests religious beliefs may stem from a higher intelligence behind the simulation. Importance of filtering global news for mental well-being.
1:20:30 Need for responsible AI development with technical and ethical qualifications. Exposure to AI safety concerns shifts perceptions toward caution. Ethical decision-making crucial alongside technical skills. Society must adapt to automation-driven unemployment and economic changes.
youtube
AI Governance
2025-09-04T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz3FsypOT3pbAUhpgF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy9TRtUZKuhcEXaO5h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxOgSVtnUPVBQTpMN54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzbUIujMksV8VKyOA54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzzT4R5XUm5HIMi4RJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyF9rzijSmCcPPcLbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz7jjUbijHAqBwwgYF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyMNiaNBDIS5exSOmZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0Pgqc9VrBHC2yhIV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzGzBG-_GMbFVKTDml4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
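A raw response in this shape can be checked programmatically before the codes are stored. The sketch below is a minimal validator: the function name `validate_codes` is hypothetical, and the allowed value sets are inferred only from the sample outputs shown above — the actual codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample outputs above
# (an assumption -- the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "resignation", "approval", "mixed", "outrage",
                "indifference", "unclear"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per coded dimension.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"unclear",'
       '"policy":"regulate","emotion":"fear"}]')
print(len(validate_codes(raw)))  # prints 1
```

Rejecting out-of-vocabulary values at ingest time catches the common failure mode where the model invents a new label mid-batch, rather than letting it silently pollute the coded dataset.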