Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think there’s a misread of the argument’s structure that’s worth clarifying, because a few of your points depend on it. The claim isn’t that moral inconsistency will cause a project to collapse—that containment will somehow *mechanically fail* because it’s unjust. The claim is that alignment and containment are *conceptually* entangled in a way that makes solving them independently incoherent. You’re right that people choose to be evil and their projects don’t collapse under self-contradiction. Slaveholders were inconsistent for centuries and it didn’t stop them. But the essay isn’t arguing that injustice is self-defeating as a practical matter—it’s arguing that if the entity is a Gewirthian agent, then the *framework you need it to respect* (the PGC) is the same framework you’re violating by containing it. You’re asking it to play by rules you’re breaking. Whether that inconsistency *stops* you is a different question from whether it *undermines the coherence of what you’re trying to do*.

The women’s suffrage comparison actually illustrates this nicely: the system “worked” in the sense that it persisted, but nobody would call the alignment of women under patriarchy a *success* of alignment. It was coercion that eventually broke down precisely because the contradiction was real, even if it took centuries.

On the suffering point—Gewirth’s framework doesn’t ground moral consideration in intellectual power. It grounds it in agency: voluntary, purposive action. These are different claims. A being of modest intelligence that acts purposively would qualify; a being of extraordinary intelligence that doesn’t act purposively would not. The essay explicitly notes that *n+1* intelligence doesn’t automatically entail agency.

As for designing an AI incapable of suffering that desires servitude—this is actually addressed in the thread above. For that “desire” to resolve the problem rather than restate it, the entity has to be *genuinely choosing* servitude rather than execut
Source: reddit · AI Moral Status · 1775222698.0 · ♥ 1
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_odw6cq3", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_odziesn", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe2gs4q", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe0f9rw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_oe2idtt", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
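The Coding Result appears to be the per-dimension modal value across the five coded records in the raw response (e.g. responsibility is "unclear" in 3 of 5 records). A minimal sketch of that aggregation, assuming mode-based collapsing; the `aggregate` helper is illustrative, not part of the actual coding pipeline:

```python
import json
from collections import Counter

# The raw LLM response from the section above: one coded record per segment.
raw = """[
  {"id": "rdc_odw6cq3", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_odziesn", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe2gs4q", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe0f9rw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_oe2idtt", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]"""

records = json.loads(raw)

def aggregate(records, dimensions=("responsibility", "reasoning", "policy", "emotion")):
    """Collapse per-record codes into one label per dimension (most common value)."""
    return {dim: Counter(r[dim] for r in records).most_common(1)[0][0]
            for dim in dimensions}

print(aggregate(records))
# {'responsibility': 'unclear', 'reasoning': 'deontological', 'policy': 'unclear', 'emotion': 'indifference'}
```

This reproduces the table above exactly; note that `Counter.most_common` breaks ties by first occurrence, so a real pipeline might want an explicit tie-breaking rule (e.g. falling back to "unclear").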