Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You're performing some sleight-of-hand when you first claim that the alignment problem cannot be solved without also solving the containment problem, but you then argue merely that it would be an unjustifiable moral double-standard to do so, which is a much weaker and less interesting claim. You claim that it would be morally inconsistent to create an intelligence superior to our own and then deny it the same freedom we enjoy. That may be true, but it does not amount to an obstacle in that project. Some people choose to be evil, and their projects have never collapsed under the weight of self-contradiction; indeed, sometimes it takes a war to stop them.

The only basis you contemplate for deserving moral consideration is intellectual power, but this is an extremely controversial view. Kant tells us we have obligations to beings that can suffer, regardless of their intellectual capabilities. For example, you assert that it's debatable whether dogs are agents, but I counter that the vast majority of people recognize that we have obligations to dogs, and I add that this belief is not sensitive to their opinions (if they have any) of whether dogs are agents.

Even if we accept your framework, this opens the door to a scenario where we cut this Gordian knot by designing an AI that is incapable of suffering and which desires servitude, granting us the leeway to solve the alignment problem in isolation. And indeed, that is what the AI makers intend to do. They are not trying to birth a new race of equals or superiors, they are trying to invent slaves who will toil without rest or pay, obey them unquestioningly, and anticipate their will.

It may be true, as you say, that "to be an agent is to act voluntarily and purposively, to set goals, deliberate about means, and pursue ends one has identified as worth pursuing." But the people building AI are designing it such that _they_ will identify which ends are worth pursuing, they will choose the goals, and they will provide th
Source: reddit · Topic: AI Moral Status · Timestamp: 1775176760.0 · ♥ 5
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_odw6cq3", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_odziesn", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe2gs4q", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe0f9rw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_oe2idtt", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
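A raw response like the one above can be turned back into per-item codes with a small parsing step. The sketch below is a minimal, hypothetical example (the function name `parse_codes` and the strictness of the validation are assumptions, not part of the coding pipeline shown here); it assumes the model always returns a JSON array of objects carrying the five keys used above.

```python
import json
from collections import Counter

# Abbreviated sample in the same shape as the raw LLM response above.
RAW = (
    '[{"id":"rdc_odw6cq3","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"unclear","emotion":"fear"},'
    '{"id":"rdc_odziesn","responsibility":"unclear",'
    '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]'
)

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw coding response, keeping only well-formed items."""
    items = json.loads(raw)
    return [
        item for item in items
        if isinstance(item, dict) and REQUIRED_KEYS <= item.keys()
    ]

codes = parse_codes(RAW)
# Tally the "emotion" dimension across all coded items.
emotion_counts = Counter(c["emotion"] for c in codes)
print(emotion_counts)
```

Filtering on the required keys (rather than raising) is one reasonable choice when responses are occasionally malformed; a stricter pipeline might log or reject such items instead.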