Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> If consciousness is included then would "moral status" extend to other animals?

Yes, insofar as we think that animals are conscious.

> I also want to note that I don't think our morality can be decoupled from our feelings, so how is the robot to be programmed to map to our morality when we're trying to disregard our feelings as being at least a partial determiner of them?

There are different senses of being moral, I suppose, but "not causing unnecessary suffering" and "not infringing upon people's rights" and other criteria are action-based, so it's theoretically straightforward to program a machine not to do those harmful things even if it has no emotions or consciousness. There has been some work on this; see r/AIethics and https://www.reddit.com/r/AIethics/comments/4y2pof/machine_ethics_reading_list/.
reddit · AI Moral Status · 1487178517.0 · ♥ 8
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | none                       |
| Reasoning      | mixed                      |
| Policy         | unclear                    |
| Emotion        | indifference               |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_dds0ck6", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dds3a6a", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dds2y55", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_dds4pao", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dds5e0b", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
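The raw response is a JSON array with one record per coded comment, keyed by `id`. The coding-result table above appears to come from the element whose values match it (`rdc_dds4pao`). A minimal sketch of recovering one comment's coding from such a batch response, assuming that id mapping; `coding_for` is a hypothetical helper, not part of any tool shown on this page, and only two of the five records are reproduced for brevity:

```python
import json

# Raw batch response, abbreviated to two of the records shown above.
raw = (
    '[{"id":"rdc_dds0ck6","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_dds4pao","responsibility":"none","reasoning":"mixed",'
    '"policy":"unclear","emotion":"indifference"}]'
)

records = json.loads(raw)

def coding_for(records, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

row = coding_for(records, "rdc_dds4pao")
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# none mixed unclear indifference
```

Validating the raw string with `json.loads` before display would also catch truncated or malformed model output early, which matters when the LLM occasionally emits extra text around the JSON.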