Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
At 45:29 Eliezer tells a story with a rhetorical trick that clouds the very argument he is making. He speaks of AI killing all of humanity, and humanity fixing AI so it won't happen again, and it happens again, and we fix it again, and the pattern repeats. Then he points out that AI only has to kill all of humanity once, and humans can't learn from our own global extinction.
But species extinction in the natural world is a better analogy and more nuanced than what was discussed. Natural selection has shaped every species and extinguished many over millions of years. On the other hand, in a few hundred years, humankind has created conditions that have caused massive extinctions. But there is a recognizable pattern to these human-caused extinctions and a series of steps prior.
We need to learn how to identify and respond to the many warning signs that will come well before an AI-driven extinction. And we need to structure the releases and free movements of each AI so that we have many opportunities to learn and fix.
Source: youtube · AI Governance · 2025-12-01T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwfuJldpu13N5yIjgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwQtPiShjExt_Mm1Vp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzd-yLBfi9WMHa4g0p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnLFQDQY47429ZVih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyENebu1tHpusFjJtd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz0siumGK2Szqinj4x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDEYCalO3RoZEuB_J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdMVHYcOIlKj399Gl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzsQj-ugSeNf558p_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw-wjLHTVGqJ0RSS-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
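Looking up a comment's coding from a raw response like the one above amounts to parsing the JSON array and indexing the rows by their `id` field. The sketch below shows this with two rows abbreviated from the response; the variable names and the `coded` index are illustrative, not part of the tool itself.

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment
# (abbreviated here to two of the rows shown above).
raw_response = """
[
  {"id": "ytc_UgwfuJldpu13N5yIjgJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyENebu1tHpusFjJtd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Index the coded rows by comment ID so a single comment can be looked up.
coded = {row["id"]: row for row in json.loads(raw_response)}

entry = coded["ytc_UgyENebu1tHpusFjJtd4AaABAg"]
print(entry["responsibility"], entry["emotion"])  # company outrage
```

Each row carries the same four dimensions as the coding-result table (responsibility, reasoning, policy, emotion), so a missing key on any row would indicate a malformed model response.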