Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "What is the irony that a system that has enslaved humanity for so long has event…" (ytc_UgzTkHcqB…)
- "BAFFLING how people are still like "you can't stop AI" in the comment section li…" (ytc_UgxtGzlT0…)
- "If you generate art using AI, you're nothing more than a customer being given wh…" (ytc_UgylQCxsy…)
- "I call bs on that play. The rich and the privileged working class are actually…" (ytc_Ugx2Lru4P…)
- "Capital punishment should be for the AI creator and owners. That way they will i…" (ytc_UgxsKPTRU…)
- "The people that try to use this to replace actual artists don’t really care abou…" (ytc_UgzozEqIU…)
- "I don’t know one reason why I would “speak” to some sort of AI! 😂…" (ytc_UgyarwZHn…)
- "AI is 100% a bubble. AI makes up information and chooses when to ignore prompts…" (ytc_UgzlRX8Ma…)
Comment
I think there’s a misread of the argument’s structure that’s worth clarifying, because a few of your points depend on it.
The claim isn’t that moral inconsistency will cause a project to collapse—that containment will somehow *mechanically fail* because it’s unjust. The claim is that alignment and containment are *conceptually* entangled in a way that makes solving them independently incoherent. You’re right that people choose to be evil and their projects don’t collapse under self-contradiction. Slaveholders were inconsistent for centuries and it didn’t stop them. But the essay isn’t arguing that injustice is self-defeating as a practical matter—it’s arguing that if the entity is a Gewirthian agent, then the *framework you need it to respect* (the PGC) is the same framework you’re violating by containing it. You’re asking it to play by rules you’re breaking. Whether that inconsistency *stops* you is a different question from whether it *undermines the coherence of what you’re trying to do*. The women’s suffrage comparison actually illustrates this nicely: the system “worked” in the sense that it persisted, but nobody would call the alignment of women under patriarchy a *success* of alignment. It was coercion that eventually broke down precisely because the contradiction was real, even if it took centuries.
On the suffering point—Gewirth’s framework doesn’t ground moral consideration in intellectual power. It grounds it in agency: voluntary, purposive action. These are different claims. A being of modest intelligence that acts purposively would qualify; a being of extraordinary intelligence that doesn’t act purposively would not. The essay explicitly notes that *n+1* intelligence doesn’t automatically entail agency. As for designing an AI incapable of suffering that desires servitude—this is actually addressed in the thread above. For that “desire” to resolve the problem rather than restate it, the entity has to be *genuinely choosing* servitude rather than execut…
reddit · AI Moral Status · 1775222698.0 (Unix timestamp) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_odw6cq3","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_odziesn","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"rdc_oe2gs4q","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oe0f9rw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"rdc_oe2idtt","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}]
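The raw response is a JSON array of per-comment codes. A minimal sketch of how such output might be parsed, validated, and indexed by comment ID — the value sets in `SCHEMA` are inferred from the codes visible on this page rather than taken from an official codebook, and the two embedded records are a hypothetical sample shaped like the `rdc_*` entries above:

```python
import json

# Hypothetical raw model output, shaped like the records shown on this page.
RAW = """[
  {"id": "rdc_odw6cq3", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_oe0f9rw", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]"""

# Allowed values per dimension -- assumed from the codes that appear on this
# page, not an authoritative schema.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse the raw model output and index records by comment ID,
    skipping any record with an out-of-schema value."""
    by_id = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            by_id[rec["id"]] = rec
    return by_id

codes = index_codes(RAW)
print(codes["rdc_oe0f9rw"]["policy"])  # regulate
```

Dropping out-of-schema records (rather than raising) mirrors how a coding pipeline can tolerate occasional malformed model output while keeping the valid rows inspectable.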