Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up by its ID, or browse the random samples below.
- “You can’t even tell what’s real anymore. AI Image Detector helped me spot a ton …” (ytc_UgynzFlfV…)
- “Guys I am ready for the robot apocalypse! The only thing you need is *WATER* !!!…” (ytc_Ugx_Lk2Hf…)
- “15:26 again, it's free. I trained AI myself on my PC for just 2$ of electricity …” (ytc_UgwcBoz6x…)
- “So AI gets defensive, deflects and lies to cover up when it messes up. Holy cow…” (ytc_Ugysiw5Qj…)
- “hear me out, AI does all the jobs and we get paid for nothing and we can live do…” (ytc_Ugwj-m4a3…)
- “I meant was your statement originally saying the ai could be used as like a draf…” (ytr_Ugw3zL5_J…)
- “If AI is to be used to deduce inventions for the betterment of humankind (Big if…” (ytc_UgwPqono8…)
- “With all this More & More Believable AI Generated Content being released into th…” (ytc_UgyVGVG3N…)
Comment
Super AI is the fear of Yudkowsky and his think tank MIRI because -- phew.
Well, part of it is because they're Singularists. They believe that super AI is either very likely or certain, and that it will happen in the near future. Yudkowsky and his Rationalists (a community that sprang up around his blog and forum LessWrong, which is all about how you gotta think a certain way to be super logical, and yeah, if this sounds like some sort of super Reddit debate-bro stuff, it's because it kinda is) think that a super AI would be more capable of crunching numbers and coming up with the most ethical decisions through highly complex analysis that humans can't do.
Therefore, since Yudkowsky and a lot of his Rationalists are also Effective Altruists and believe that they, through their super logical and rational ways of thinking about the world, are more capable of identifying the most ethical choices than the illogical masses, the illogical masses must be taught Rationalism. As Rationalists, they will then come to the absolutely logical conclusion that the way to maximize good is to develop the benevolent super AI as quickly as possible, so it can make even better decisions than any human.
So, for them, yeah, they have to talk about super AI. The other stuff is irrelevant to them, because they've convinced themselves that any time not spent on developing super AI and "aligning" it to human interests is just time wasted.
If that sounds a little culty, well, yeah, it's a little ... much. Yudkowsky might not himself run a cult but it's -- there's a reason there's been at least one death cult associated with Rationalism.
Source: youtube · AI Moral Status · 2025-11-02T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyvQRoflPZn7t69o_x4AaABAg.AP2QGordsKqAP4U16ElfC3","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyprtK2H6ZnQccEmax4AaABAg.AP2LElv1ficAPAH6oAP5Lg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyprtK2H6ZnQccEmax4AaABAg.AP2LElv1ficAS2wKizF0w8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwCMEtyTtZwynwkXrV4AaABAg.AP1lM-KdbLQAP4Oidn1uw9","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwCMEtyTtZwynwkXrV4AaABAg.AP1lM-KdbLQAPCY0MmjHmx","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugzrp8HbL5oyccS7tDh4AaABAg.AP1jrhQEhXQAP1xLlAG7D-","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwRM1UtUh06iVVjG654AaABAg.AP1_ZRcpUnuAP1ykCa0r7U","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwqbApXzc0IwtAuhjV4AaABAg.AP1FQf2hs4-AP1zdo8IbnA","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytr_Ugw2SqS_h8aKB6KVPI94AaABAg.AP1A9_s-JYIAP1Doud4CvW","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytr_Ugw2SqS_h8aKB6KVPI94AaABAg.AP1A9_s-JYIAPFPAAXcqJX","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"}
]
```
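Each raw response is a JSON array with one object per comment, carrying the four coded dimensions shown in the Coding Result table. A minimal sketch of parsing such a response and indexing codings by comment ID (the IDs and loader below are illustrative assumptions, not the tool's actual code):

```python
import json

# Sample payload in the same shape as the raw LLM response above
# (illustrative comment IDs, not real ones).
raw = """
[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "disapproval"},
  {"id": "ytr_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

# The four coding dimensions every object is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(payload: str) -> dict[str, dict[str, str]]:
    """Parse a raw coding response and index it by comment ID,
    checking that every expected dimension is present."""
    rows = json.loads(payload)
    coded = {}
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(raw)
print(coded["ytr_example1"]["emotion"])  # prints "disapproval"
```

With the index in hand, the "look up by comment ID" view is a single dictionary access, and malformed model output (a missing dimension) fails loudly instead of silently producing gaps.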