Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the commenter is pointing at the exact pressure point. The argument leans heavily on a clean move from intelligence → agency → moral status, but that transition is doing a lot of unexamined work. The introduction of “interiority” feels less like a derived requirement and more like a necessary patch to make the Gewirth move go through. What makes this tricky in the AI context is that those categories no longer line up cleanly. You can have systems that behave in highly goal-directed, seemingly purposive ways without any clear reason to think there’s a “someone” there. So if agency doesn’t require interiority, the category expands so far that it starts including things we wouldn’t want to grant moral standing to. But if it does require interiority, then the whole argument hinges on the hard problem of consciousness, which we’re nowhere close to resolving. So it ends up feeling less like the paradox has been solved and more like the framework is being stretched past the conditions it was built for. The interesting part, to me, is exactly that tension: AI might be one of the first cases where behavior, agency, and mind come apart, and a lot of our inherited ethical concepts assume they travel together.
Source: reddit · AI Moral Status · Unix timestamp 1775322274.0 · ♥ 1
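The numeric value in the source line is a Unix timestamp (seconds since the epoch). A minimal Python sketch to decode it, assuming the timestamp is UTC:

from datetime import datetime, timezone

# Decode the post's Unix timestamp; UTC is an assumption.
posted = datetime.fromtimestamp(1775322274.0, tz=timezone.utc)
print(posted.isoformat())  # 2026-04-04T17:04:34+00:00, a few weeks before the coding run below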
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_dub4tsj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_oea9sdo","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"rdc_odvwb70","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_odw69wp","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},{"id":"rdc_odwcad1","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"}]