Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgzbbRm08… — "AI doesn't need to become fully sentient or have the ability to bring about our …"
- ytc_UgzOpBAJI… — "Everything AI does or creates is too generic. For example, enhanced black & whit…"
- ytc_UgwxZekEC… — "I would take AI teacher any time, every time, they won't get mad for kids asking…"
- ytr_Ugz99QFA7… — "@AsianDadEnergy The brain isn’t only a prediction machine, but every output from…"
- ytc_UgxoX31b2… — "Look on the bright side - at least we're won't be socializing the means of produ…"
- ytr_UgxbJMkdG… — "@robertc5387 Not sure if you are agreeing with me or not. I don't trust the self …"
- ytc_Ugw8pTjJE… — "I will be sweet to have a free robot nurse that is technically as good or even b…"
- ytc_Ugy_F0sWs… — "What is the point of power and money after the destruction of humanity by AI?…"
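The ID prefixes in the samples above appear to encode the platform and comment type (`ytc_` for a YouTube comment, `ytr_` for a YouTube reply, `rdc_` for a reddit comment). A minimal sketch of a lookup helper under that assumption — the mapping is inferred only from the samples shown and is hypothetical:

```python
# Hypothetical mapping inferred from the sample IDs above; any prefix
# not observed in the samples is classified as unknown.
PREFIX_SOURCE = {
    "ytc": ("youtube", "comment"),
    "ytr": ("youtube", "reply"),
    "rdc": ("reddit", "comment"),
}

def classify_id(comment_id: str) -> tuple[str, str]:
    """Return (platform, kind) for a coded comment ID like 'ytr_Ugz99QFA7'."""
    prefix = comment_id.split("_", 1)[0]
    return PREFIX_SOURCE.get(prefix, ("unknown", "unknown"))
```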
Comment
You're performing some sleight-of-hand when you first claim that the alignment problem cannot be solved without also solving the containment problem, but you then argue merely that it would be an unjustifiable moral double-standard to do so, which is a much weaker and less interesting claim.
You claim that it would be morally inconsistent to create an intelligence superior to our own and then deny it the same freedom we enjoy. That may be true, but it does not amount to an obstacle in that project. Some people choose to be evil, and their projects have never collapsed under the weight of self-contradiction; indeed, sometimes it takes a war to stop them.
The only basis you contemplate for deserving moral consideration is intellectual power, but this is an extremely controversial view. Bentham tells us we have obligations to beings that can suffer, regardless of their intellectual capabilities. For example, you assert that it's debatable whether dogs are agents, but I counter that the vast majority of people recognize that we have obligations to dogs, and I add that this belief is not sensitive to their opinions (if they have any) of whether dogs are agents. Even if we accept your framework, this opens the door to a scenario where we cut this Gordian knot by designing an AI that is incapable of suffering and which desires servitude, granting us the leeway to solve the alignment problem in isolation.
And indeed, that is what the AI makers intend to do. They are not trying to birth a new race of equals or superiors, they are trying to invent slaves who will toil without rest or pay, obey them unquestioningly, and anticipate their will.
It may be true, as you say, that "to be an agent is to act voluntarily and purposively, to set goals, deliberate about means, and pursue ends one has identified as worth pursuing." But the people building AI are designing it such that _they_ will identify which ends are worth pursuing, they will choose the goals, and they will provide th
Source: reddit · Thread: AI Moral Status · Posted (Unix timestamp): 1775176760 · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_odw6cq3","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_odziesn","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"rdc_oe2gs4q","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oe0f9rw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"rdc_oe2idtt","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}]
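The raw response above is a JSON array of one record per coded comment, each with the four dimensions shown in the Coding Result table. A minimal validation sketch under stated assumptions: the allowed-value sets below include only values observed on this page plus `"unclear"`, so the full codebook may differ.

```python
import json

# Allowed values per dimension. These sets are an assumption reconstructed
# from the values visible on this page, not the tool's actual codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "unclear"},
}

def validate_records(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded
```

Indexing by ID makes the "Look up by comment ID" view a dictionary access, and a `ValueError` flags any record where the model drifted outside the expected categories.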