Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or click one of the random samples below to open it.
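For scripted access outside the UI, a minimal lookup might look like the sketch below. It assumes the coded results are stored one JSON object per line (JSONL) in a hypothetical `coded_comments.jsonl`, with the same `id` and dimension fields as the raw responses shown further down; adjust the path and field names to your actual store.

```python
import json

def load_codings(path="coded_comments.jsonl"):
    """Load coded comments into a dict keyed by comment ID.

    Assumes one JSON object per line with the fields seen in the raw
    LLM responses on this page: id, responsibility, reasoning, policy,
    emotion. The filename is hypothetical.
    """
    codings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            codings[record["id"]] = record
    return codings

# Usage: look up one of the IDs shown on this page.
codings = load_codings()
print(codings.get("ytc_UgzBuIJ3HBmvbcaFCEV4AaABAg"))
```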
Random samples (click to inspect):

| Comment preview | ID |
|---|---|
| This is just the beginning at casinos and with flock cameras, there's going to b… | ytc_Ugw7Kawey… |
| Ai is good it helps hospitals, Netflix uses Ai, a lot of people use Ai because i… | ytc_UgwXQbf-I… |
| That's a thought-provoking question! The aim of creating robots like Sophia goes… | ytr_UgxsNKEMK… |
| @throwaway6380if no one ever took radical action, then important things would ne… | ytr_Ugy1c50OJ… |
| You can't gaslight someone into thinking they're not conscious. What you see, he… | ytr_UgwxdqOiI… |
| Yup, we have a lot of shit to clean up before we can prove ourselves to be trust… | rdc_e2vsmid |
| All we can hope as a human race is that this man succeeds on his mission. People… | ytc_UgwFlwFKN… |
| A.I. will be used initially as a tool to replace Humans in the workplace in orde… | ytc_Ugyf3H63b… |
Comment
Dr. Roman Yampolskiy, a leading AI safety expert, discusses the potential dangers of artificial intelligence and its implications for humanity. He expresses concern that superintelligence could lead to widespread unemployment, global collapse, or even human extinction (0:30).
Key points from the discussion include:
- **AI's rapid advancement:** AI is becoming increasingly capable due to increased compute and data, but its safety is not guaranteed (2:42). There's a risk of catastrophic outcomes because humans are not in control (4:35).
- **Job displacement:** Dr. Yampolskiy predicts that AI could replace 99% of jobs by 2030, leading to unprecedented unemployment levels (0:37, 11:38). He illustrates this by discussing how AI can already perform tasks like podcasting more efficiently than humans (12:02).
- **Lack of AI safety measures:** He criticizes companies like OpenAI and figures like Sam Altman for prioritizing development and profit over safety, violating established guardrails for AI development (1:04, 42:32). He also explains that the idea of simply "unplugging" an advanced AI is unrealistic due to its distributed nature and superior intelligence (30:07).
- **The Singularity and 2045:** Dr. Yampolskiy mentions Ray Kurzweil's prediction of the singularity by 2045, a point where technological growth becomes uncontrollable and irreversible, making it impossible for humans to keep up with advancements (24:09).
- **AI safety as the most important issue:** He argues that AI safety is more critical than other global issues like climate change or war because superintelligence could either solve these problems or render them irrelevant by causing human extinction (28:51).
- **Simulation theory:** Dr. Yampolskiy expresses a near-certain belief that humans are living in a simulation, suggesting that many religious texts describe a superintelligent being creating a simulated world (56:10, 1:01:45).
- **Advice for the future:** He advises people to live each day to the fullest and pursue impactful activities, as the future with advanced AI is uncertain (55:44). He emphasizes the importance of universal agreement on AI's dangers to make responsible decisions about its development (47:00).
Source: youtube · AI Governance · 2026-02-23T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
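For downstream analysis, a coding result like the table above can be modeled as a small record type. The sketch below is a minimal assumption based only on the dimension values visible on this page (the real label sets may be larger), and the class name is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment; value comments list only labels seen on this page."""
    comment_id: str
    responsibility: str  # seen: none, user, government, ai_itself
    reasoning: str       # seen: unclear, deontological, consequentialist, virtue
    policy: str          # seen: none, regulate, liability, unclear
    emotion: str         # seen: indifference, outrage, fear, approval, resignation
    coded_at: datetime   # e.g. 2026-04-26T23:09:12.988011
```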
Raw LLM Response
```json
[
  {"id":"ytc_UgzOLZDQI3Lgsu5uAed4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwIBV_LTQyUZ4jBVz94AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzBuIJ3HBmvbcaFCEV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxQSY-Ouh3dnptfcbt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWfC1Bw32T0AnLsaJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwYSyOCScCEmx1ewkp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzI7ZYr8G0jIpicHex4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxwXjhmv6imJmWhyiB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyhX4gcKb8uHgPUpfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzN8iFOO458S6tOISd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
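A raw response like the one above can be parsed and sanity-checked before it is merged into the results store. The sketch below assumes the model returns a JSON array of objects with exactly the five keys shown; the helper name is hypothetical.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of codings) into a
    dict keyed by comment ID, rejecting malformed entries."""
    entries = json.loads(raw)
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of codings")
    parsed = {}
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"coding for {entry.get('id')} is missing {missing}")
        parsed[entry["id"]] = entry
    return parsed
```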