Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thanks for the feedback, I edited the OP to try to address some of your criticisms, I'll do it again here just to be more explicit.

1: Any given person must have greater, less, or equal moral value to myself. I define morality as applying to any actions/thoughts/etc which affect the well being of sentient beings. In other words, the very ability of a mind to have and experience desirable or undesirable sensations is what gives it moral value. A rock or wholly unsentient thing like perhaps bacteria has no moral value; as sentience increases, so does moral value. We humans seem to be the most sentient beings that we know of existing but this morality could certainly apply to any form of sentience, such as the more intelligent animals, or in the future to self-aware AI. How sentient non-human things are seems to depend a lot on science, granted.

2/3. It is logically inconsistent (e.g. false) for any person to have greater or less moral value than myself. Therefore, everyone has equal moral value to myself. As per what I just said above, and also per 10), this isn't actually what I'm claiming. I'm claiming that we should assume by default that other persons which appear to be mentally (by which I mean the capacity to experience desirable and undesirable sensations) about the same as us should also be default to be considered morally equal to us. For other persons (by which I mean any kind of sentient mind) that are obviously different from us, this is not necessarily true. For example, science would seem to indicate that while dogs are intelligent/sentient, certainly moreso than ants, they are probably not as sentient/intelligent as humans are, so actions that place a higher moral value on humans than dogs seems to be justified. I think you could make the same case for obvious sociopaths/psychopaths. Individuals that have no capacity for empathy or who even take pleasure in the pain of others have an obviously different capacity to perceive or desire posit
Source: reddit · Topic: AI Moral Status · Timestamp: 1415090450.0 · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        utilitarian
Policy           none
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
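Each coded comment carries one label per dimension. A minimal sketch of such a record as a Python dataclass, assuming the field names shown in the table above; the example label values are only those visible on this page, not the full label sets used by the coding pipeline:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CodingResult:
        """One coded comment: a single label per dimension (assumed schema)."""
        responsibility: str   # e.g. "none"
        reasoning: str        # e.g. "utilitarian", "consequentialist", "mixed", "unclear"
        policy: str           # e.g. "none"
        emotion: str          # e.g. "approval", "resignation", "indifference", "mixed"
        coded_at: datetime

    # The result shown above, as a record:
    result = CodingResult(
        responsibility="none",
        reasoning="utilitarian",
        policy="none",
        emotion="approval",
        coded_at=datetime.fromisoformat("2026-04-25T08:33:43.502452"),
    )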
Raw LLM Response
[ {"id":"rdc_n8jknk3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_n8j76rx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"rdc_n8jdfel","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_clrt2bh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"rdc_clsif6k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]