Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are a lot of other smart people in the same communities as Eliezer who make a more convincing case that the risk is high -- but they do not believe it is anywhere near certain that everyone dies. I would say the chance of a "not bad" outcome is widely considered to be above 50%. But: imagine you can get a plane ticket at half price, but there's a 1% chance that the plane will crash and everyone dies. Do you buy a ticket? That's AI: lower prices, more efficiency, but maybe we all die. There are good arguments why the risk of extinction is above 1%, and Eliezer is just one of the perspectives on that. And there are strong arguments that even if everything turns out fine, people will create AGIs that are genuinely so much smarter than any human that either AGIs take over the world, or the humans who control the AGIs take over the world. (And when I say AGI I don't just mean LLMs that have been perfected; my article "GPT5 won’t be what kills us all" from two years ago talks about this, and I think the new paper "Less is More: Recursive Reasoning with Tiny Networks" appears to vindicate my thesis.)
youtube AI Governance 2025-10-16T19:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxI9_LO6sJSSfQcnxh4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgxqfnwgFqc6Ef63kz94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugw0jX8VLGrrzbdmwgV4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",       "emotion": "resignation"},
  {"id": "ytc_UgxUck818KyWqs1q92h4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "approval"},
  {"id": "ytc_Ugz4s32doXVwnQyh4gZ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgwT-rQEJfN8TKnv8sp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugwfh3knfVCcSIikbN14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgwJK1r1NV-a6p_8QRB4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgxEYs-kmWDJdk-0LBl4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugz74lqU69JfC69Y9VZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",       "emotion": "resignation"}
]
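The raw response is a JSON array with one record per comment in the batch, and the Coding Result shown above is presumably the record whose dimensions match this comment's id. A minimal sketch in Python of how such a batch can be parsed and indexed by id (the excerpt below reuses two records from the raw response; the variable names are illustrative, not from the coding pipeline):

```python
import json

# Excerpt of a raw LLM response: one JSON object per coded comment.
raw_response = """
[
  {"id": "ytc_UgxI9_LO6sJSSfQcnxh4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwT-rQEJfN8TKnv8sp4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the batch by comment id so each comment's codes can be looked up.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up the record that matches the Coding Result table above.
record = codes_by_id["ytc_UgwT-rQEJfN8TKnv8sp4AaABAg"]
print(record["responsibility"], record["policy"], record["emotion"])
# distributed regulate fear
```

Indexing by id rather than by list position keeps the lookup robust if the model returns the records in a different order than the comments were submitted.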