Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- It was truly intriguing and enjoyable to watch a brilliant philosopher mess with… (ytc_UgzQuIEox…)
- @mjolninja9358 You tell the AI specifically what you want to see. Prompting is n… (ytr_UgwJP_mXW…)
- I had to interview a job candidate like that recently. I didn't say it to him, _… (rdc_nlzoeku)
- You get replaced by AI. You get replaced by Ai. Everybody gets replaced by AI!… (rdc_m80ut8s)
- As a painter for the last 20 years, I just had to shutter my studio due to decli… (ytc_Ugyq7_gzP…)
- “Predictive Policing” = wasting time and resources, committing violent crime and… (ytc_UgwzVsMWW…)
- Why would someone want to learn how to use ai, real art is just better… (ytr_Ugx73ansa…)
- Why would you want to make something that is smarter than humans? We humans are … (ytc_UgzsTBAPd…)
Comment
Well, part of that is that Yudkowsky and MIRI, the organization Yudkowsky founded, are part of a ... kind of a new religious movement. They're "Rationalists", which is a weird, somewhat culty community that sprung up around Yudkowsky's blog and forum LessWrong, and I think most if not all Rationalists are also Singularists, so that's just one of their beliefs: that super smart artificial intelligence is either likely or inevitable in the near future (the event where AI becomes super AI is called the Singularity).
They think the danger with the Singularity is not that it happens, but that it happens accidentally and that it's not the godlike benevolent super AI that they want.
Oh, and by the way, Rationalism and Effective Altruism often have a lot of overlap, so these people also think that in order to min-max the amount of goodness in the future, one needs to develop an AI smarter than humans to make better ethical calculations than humans can. Thus, to do the most good, one needs to spend their life bringing this super AI into existence as quickly as possible. Yudkowsky and a lot of other people in this space are big into cryonics, too, so they also think you should cryogenically freeze yourself after a long life of trying to hasten the arrival of a benevolent super AI who will in the future develop methods to revive you.
So, yeah, it's not the *interview* that makes this book seem unreasonable. I think interviewing Yudkowsky and Soares without the context of their wild beliefs makes the book sound far more reasonable than it should.
youtube
AI Moral Status
2025-11-02T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
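The table above is a single coding record rendered one dimension per row. A minimal sketch of how such a record might be rendered into that table, assuming the record is a plain dict of dimension names to values (the helper name is hypothetical):

```python
def render_coding_table(coding: dict[str, str]) -> str:
    """Render one coding record as a two-column markdown table."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dimension, value in coding.items():
        # Display the dimension key in human-readable form.
        rows.append(f"| {dimension.replace('_', ' ').capitalize()} | {value} |")
    return "\n".join(rows)

record = {"responsibility": "none", "reasoning": "unclear",
          "policy": "none", "emotion": "indifference"}
print(render_coding_table(record))
```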
Raw LLM Response
[
{"id":"ytr_UgyvQRoflPZn7t69o_x4AaABAg.AP2QGordsKqAP4U16ElfC3","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyprtK2H6ZnQccEmax4AaABAg.AP2LElv1ficAPAH6oAP5Lg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyprtK2H6ZnQccEmax4AaABAg.AP2LElv1ficAS2wKizF0w8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwCMEtyTtZwynwkXrV4AaABAg.AP1lM-KdbLQAP4Oidn1uw9","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwCMEtyTtZwynwkXrV4AaABAg.AP1lM-KdbLQAPCY0MmjHmx","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugzrp8HbL5oyccS7tDh4AaABAg.AP1jrhQEhXQAP1xLlAG7D-","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwRM1UtUh06iVVjG654AaABAg.AP1_ZRcpUnuAP1ykCa0r7U","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwqbApXzc0IwtAuhjV4AaABAg.AP1FQf2hs4-AP1zdo8IbnA","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytr_Ugw2SqS_h8aKB6KVPI94AaABAg.AP1A9_s-JYIAP1Doud4CvW","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytr_Ugw2SqS_h8aKB6KVPI94AaABAg.AP1A9_s-JYIAPFPAAXcqJX","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"}
]
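The raw response above is a JSON array of per-comment codings. A minimal sketch, assuming this shape, of how such a batch could be parsed and indexed to support lookup by comment ID (the field names come from the response; the function name is hypothetical):

```python
import json

# Excerpt of a raw batch response: a JSON array of coding objects.
raw_response = '''
[
  {"id": "ytr_UgyvQRoflPZn7t69o_x4AaABAg.AP2QGordsKqAP4U16ElfC3",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugzrp8HbL5oyccS7tDh4AaABAg.AP1jrhQEhXQAP1xLlAG7D-",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]
'''

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a batch response and index each coding by its comment ID."""
    codings = json.loads(raw)
    return {coding["id"]: coding for coding in codings}

by_id = index_codings(raw_response)
coding = by_id["ytr_Ugzrp8HbL5oyccS7tDh4AaABAg.AP1jrhQEhXQAP1xLlAG7D-"]
print(coding["emotion"])  # prints "indifference"
```

Indexing by ID is what makes the "Look up by comment ID" inspection shown at the top a constant-time dictionary access rather than a scan over every batch.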