Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly (a minimal lookup sketch follows the sample list) or by browsing the random samples below:
- "Ai cap it's promoting fear how something we made gon stop us man unplug that shy…" (ytc_UgwURmToE…)
- "Well he can rest easy knowing that copilot will never be a threat to anyone.…" (ytc_UgyW6UFJp…)
- "Good, we won’t have jobs, therefore we won’t have money to buy the goods and ser…" (ytc_Ugxs5lsJK…)
- "Well, most people use their finger print to unlock their phones. So, Google and …" (ytc_UgysDQShX…)
- "They talked around it well enough, but they didn’t actually say anything about e…" (ytc_Ugz1qD1zQ…)
- "These AI “artists” also hide behind the shield of “accessibility” for disabled p…" (ytc_UgynPIRs5…)
- "Exactly. I want to see the 2x2 matrix including false positives, true positives…" (rdc_espuyj8)
- "“AI can only do things that we already know how to do” dude AI can tell a person…" (ytc_UgxQlhwR0…)
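
ID lookup amounts to a scan over the stored batch output. A minimal sketch, assuming the coded records sit in a JSON array like the one shown under Raw LLM Response below; the file name `raw_responses.json` is an assumption:

```python
import json

def lookup_coding(comment_id: str, path: str = "raw_responses.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent.

    Assumes the file holds a JSON array of objects with an "id" key,
    matching the batch format shown under "Raw LLM Response" below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)

# Example: the comment coded in the table below.
print(lookup_coding("ytc_UgwT-rQEJfN8TKnv8sp4AaABAg"))
```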
Comment
There are a lot of other smart people in the same communities as Eliezer who make a more convincing case that the risk is high -- but they do not believe it is anywhere near certain that everyone dies. I would say the chance of a "not bad" outcome is widely considered to be above 50%.

But: imagine you can get a plane ticket at half price, but there's a 1% chance that the plane will crash and everyone dies. Do you buy a ticket? That's AI: lower prices, more efficiency, but maybe we all die. There are good arguments why the risk of extinction is above 1%, and Eliezer is just one of the perspectives on that.

And there are strong arguments that even if everything turns out fine, people will create AGIs that are genuinely so much smarter than any human that either AGIs take over the world, or the humans who control the AGIs take over the world. (And when I say AGI I don't just mean LLMs that have been perfected; my article "GPT5 won’t be what kills us all" from two years ago talks about this, and I think the new paper "Less is More: Recursive Reasoning with Tiny Networks" appears to vindicate my thesis.)
Source: youtube · Topic: AI Governance · Date: 2025-10-16T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
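
Each dimension draws from a small closed label set. One way to model the record type, using only the labels visible in the batch below (the project's actual codebook may define more values), is with `Literal` types:

```python
from typing import Literal, TypedDict

# Label sets inferred from the sample batch below; the real codebook may be larger.
Responsibility = Literal["developer", "user", "distributed", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["regulate", "industry_self", "none", "unclear"]
Emotion = Literal["fear", "outrage", "resignation", "approval", "mixed"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```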
Raw LLM Response
[
{"id":"ytc_UgxI9_LO6sJSSfQcnxh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxqfnwgFqc6Ef63kz94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw0jX8VLGrrzbdmwgV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxUck818KyWqs1q92h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz4s32doXVwnQyh4gZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwT-rQEJfN8TKnv8sp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwfh3knfVCcSIikbN14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJK1r1NV-a6p_8QRB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxEYs-kmWDJdk-0LBl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugz74lqU69JfC69Y9VZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
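
Since the model returns one JSON array per batch, a thin validation pass can reject malformed output before it is stored. A sketch using only the standard library; the allowed-value sets mirror the `Literal` types above and are inferred from this single batch:

```python
import json

ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject records with unknown labels."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records
```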