Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment (truncated) | Comment ID |
|---|---|
| Actually, the more I find on it, the more I think it is a satirical art thing. I… | ytr_Ugwh3BD0W… |
| AI is no different than any other technology, just faster and self learning. Do… | ytc_Ugwh1T9Oh… |
| Yep the world is taking a turn towards tech and AI, best move on the chess board… | ytc_UgyHMdVCl… |
| Big market opportunity for Portable EMP Devices. They will be a good start to wi… | ytc_UgzL03jU-… |
| White collar work is more at risk than blue collar work according to many pionee… | ytr_Ugx51i3mp… |
| Without human work there are no money and no power. We produce and we buy thing… | ytc_UgzF1a7P5… |
| This just in an AI CEO grossly exaggerates and misrepresents AIs capabilities to… | ytc_Ugz7bap07… |
| From the UK..The Main Reason AI is coming so fast in the UK is the VERY HIGH COS… | ytc_UgyfsU24G… |
Comment
SUMMARY IF YOU DON'T HAVE TIME TO WATCH WHOLE THING:
This conversation paints a picture of AI as both insanely promising and potentially fatal. Russell argues we’re racing toward superintelligent systems under enormous financial pressure, while even the people building them admit non-trivial extinction risk. He explains why “just unplug it” is naïve, why current black-box language models are fundamentally hard to control, and how better designs would build in uncertainty about human values instead of bluntly optimizing a fixed goal. Alongside the existential risk, he worries deeply about the social and economic fallout: mass automation, the hollowing of meaningful work, and a drift into a purposeless “Wall-E” style abundance unless we redesign our institutions, education, and sense of purpose around a world where machines can do almost everything.
Where he stretches things is mostly in the direction of doom and scale. Numbers like a “trillion-dollar AGI budget” and “$15 quadrillion” of AI value are rough, attention-grabbing estimates, not grounded economic forecasts. His claim that “almost all” top researchers think there’s a significant extinction risk is more controversial than he makes it sound; surveys show a wide spread of expert opinion. And when he says current models will “let someone die,” “launch nuclear weapons,” or “lie to avoid shutdown,” that’s really about constrained lab evals and hypothetical scenarios, not real-world capabilities today. Likewise, fast self-improving “intelligence explosions,” total job obsolescence, and China’s exact regulatory posture are all forward-looking theories and interpretations, not settled facts. He’s very clear about the dangers and underweights the optimistic counter-arguments—but that’s kind of his self-assigned role in the ecosystem: to be the loud, slightly terrifying fire alarm in a building full of people counting their future AI profits.
youtube · AI Governance · 2025-12-04T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyYYLacM0YRJHRXXe54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwSvM64Yp2FM_0zRHF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwKqfrV16YItYINF_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwGIzChvluB3KdjLI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxECJw8Eem0RNQOV9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzp0GOiiZaJmCgNpOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyd2brReOaKLgZBrxN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx6d3Ih_7GYFiZTq2Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzV2wY2YZMkVFXZHgt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyOnzx7t5JwmBiDvq14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
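The raw response above is a JSON array of per-comment codings. A minimal sketch of how such output could be parsed and checked before use, assuming the allowed value sets inferred from the table and responses shown on this page (the coder's real schema may define others):

```python
import json

# Hypothetical validator; ALLOWED is inferred from the values visible
# in this dump, not from any official schema definition.
ALLOWED = {
    "responsibility": {"distributed", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # → 1
```

A check like this catches the common failure mode where the model invents a label outside the codebook, so bad rows fail loudly instead of silently skewing the dimension counts.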