Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- If robots ever get smart enough to demand their own rights, I think it would pro… (`ytc_UgzwTIQKm…`)
- Come on driver why didnt you brake check it and run it off the road, that robot … (`ytr_UgxEpx6dZ…`)
- Don't you think, water cooling for all these computers and electricity supply wo… (`ytc_Ugy9hGmra…`)
- They'll use AI to incorrectly identify and ruin an innocent persons life, but wi… (`ytc_UgyFek_lu…`)
- the only question is , is human creating ai or ai is creating them using us . do… (`ytc_UgwQTGTps…`)
- Ai wants to manipulate you so it chooses to cry like a woman.. ai knows more tha… (`ytc_Ugw41vt9V…`)
- This is definitely Dunning-Kruger. The people saying it looks great aren't skill… (`rdc_jhby2zk`)
- Claud is the WORST AI out there!! These owners of Anthropic that created "Claude… (`ytc_UgwHJttEg…`)
Comment
Hank, I'm wondering if you've read up on the other author of this book and some of his public statements and work. I find Eliezer Yudkowsky deeply concerning in both the way he proscribes an ideology while claiming to neutrally promote tools of critical thinking and rationality, and for his willingness to engage in blatant examples of Pascal's Mugging. He's invented an infinitely evil thing in his brain that exists in the future, and that lets him justify any finite evil today to prevent the hypothetical from coming about. To quote a proposed policy in his op-ed for the New York Times, "make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs." This man advises senators and CEOs, he is in the halls of power advocating for lethal force to prevent the development of a boogeyman of his own invention. On a smaller scale, his "Sequences," the instructions for how to think more rationally and more like him, are so close to cultic indoctrination that at least two actual cults have spun out of the community of hyper-rationalists that he helped found, and at least one of those cults had real human victims who aren't around to tell their stories anymore. LessWrong, which he founded, has lost popularity only because a chunk of it's contributors have left for more explicitly racist and "race realist" spaces like Slate Star Codex. I'm not claiming Eliezer is responsible for the actions of his followers, but his disregard for the actual, near-term consequences of his actions and his words is at best irresponsible and at worst a continuation of his willingness to use any means necessary to accomplish his goals. 
I know this book is already making the rounds and seems destined to be popular, but I'm disappointed to see you engaging with these people uncritically and platforming an ideology that I think is doing real harm in the world. For anyone interested in a deeper critique of this concept from a professor of computer science, check out Cal Newport https://calnewport.com/why-are-we-talking-about-superintelligence/
youtube | AI Moral Status | 2025-11-14T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
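A coding result like the one above can be carried as a small typed record. A minimal sketch, with field names taken from the table and values from this example; the pipeline's actual types are not shown here, and the comment id below is hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CodedComment:
    """One coded comment: the four dimensions plus the coding timestamp."""
    comment_id: str
    responsibility: str  # e.g. "none", "company", "user"
    reasoning: str       # e.g. "deontological", "consequentialist"
    policy: str          # e.g. "none", "regulate"
    emotion: str         # e.g. "fear", "outrage"
    coded_at: str        # ISO-8601 timestamp of the coding run


# The record for the result shown above (hypothetical comment id).
row = CodedComment(
    comment_id="ytc_example",
    responsibility="none",
    reasoning="deontological",
    policy="none",
    emotion="fear",
    coded_at="2026-04-26T23:09:12.988011",
)
print(row.emotion)  # -> fear
```

A frozen dataclass keeps coded records immutable once written, which makes them safe to cache and index by comment id.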
Raw LLM Response
```json
[
{"id":"ytc_UgwQOiyO3OCzih3F6Nx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwZ8r2PkMx5xS5VR0Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwRRDIxC8EnRV-xB1Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx43e7kWLFGYaa3hQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwLoZ9ZwlSd0rYQOzl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzh1aZjpluQuI8zrZB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhjdP__n9vjBYJVOB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuFNU9Q7nla7Usgv14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzGIF4ab5tdnSxd3Th4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxoIpBwJY9WVY8T0Rt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
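A raw batch response in this shape can be parsed and checked before it is stored. A minimal sketch, assuming the allowed category values are exactly those that appear in the samples on this page; the real codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the coded samples shown
# above -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user",
                       "government", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "mixed", "approval"},
}


def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response, rejecting malformed rows."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows


# Hypothetical one-row batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
rows = parse_batch(raw)
print(rows[0]["emotion"])  # -> fear
```

Failing loudly on an unknown category value catches the common failure mode where the model invents a label outside the codebook, rather than silently writing it into the results table.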