Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i hope so, but there are no guarantees. on the other hand, we can have an informed discussion about the distribution of consciousness even without solving the hard problem. we're doing that currently in the case of consciousness in non-human animals, where most people (including me) agree that there is strong evidence of consciousness in many species. i think it's conceivable we could get into a situation like that with AI, though there would no doubt be many hard cases. i do think that when an AI is "apparently sentient" based on behavior, we should adopt a principle of assuming it is conscious, unless there's some very good reason not to. and if it's conscious in the way that we are, i think prima facie its life should have value comparable to ours (though perhaps there will also be all sorts of differences that make a moral difference).
reddit · AI Moral Status · 1487791986.0 · ♥ 8
Coding Result
Responsibility: none
Reasoning: unclear
Policy: unclear
Emotion: resignation
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_de2k3ds", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_de2mgs2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_de2q9jg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_de2tjrw", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_de2txwo", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
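The raw response is a JSON array covering a batch of comments, so the result for one comment has to be pulled out by its record id. A minimal sketch of that lookup, using an excerpt of the response above (matching the displayed result to id `rdc_de2txwo` is an inference from its field values, not stated anywhere in the tool output):

```python
import json

# Excerpt of the raw batch response shown above (two of the five records).
raw = (
    '[{"id":"rdc_de2k3ds","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_de2txwo","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"resignation"}]'
)

records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The coding result displayed for this comment appears to be this record.
coded = by_id["rdc_de2txwo"]
print(coded["emotion"])  # resignation
```

Indexing by id rather than by array position keeps the lookup stable even if the model returns records out of order.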