Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
It’s a complicated issue. We’re probably nowhere close to even understanding how *our own* consciousness works, and that’s what confounds and hampers this debate at every turn. We all know (or at least we all SHOULD know) that LLMs aren’t sentient minds. But are the functions they perform, if they’re done well enough, a *component part* of sentience? That’s a more difficult question. We have a strong intellectual and philosophical framework for evaluating whether an operating system is Turing-complete, such that we can say which elements on the checklist are present and which are still needed. We have nothing like that framework to evaluate whether a given agent is sentient, but I would be the very opposite of shocked if it turned out that one of the seven (let’s say) core elements of sentience turned out to be something like what LLMs are doing.
reddit · AI Moral Status · 2025-02-19 (Unix 1739934694) · ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mdj90ls", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mdjfk24", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdjn2zl", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdjvktv", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mdj9r4l", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
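The coding result above appears to aggregate the five raw runs into one label per dimension. How that aggregation works is not stated here; a minimal sketch, assuming a simple per-dimension majority vote over the runs (the `majority` helper is hypothetical, not part of the actual pipeline):

```python
import json
from collections import Counter

# The five coding runs, copied verbatim from the raw LLM response above.
raw = json.loads('''
[ {"id":"rdc_mdj90ls","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_mdjfk24","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_mdjn2zl","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_mdjvktv","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"rdc_mdj9r4l","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"} ]
''')

def majority(runs, dimension):
    """Return the most frequent label for one dimension across all runs.

    Ties fall back to Counter's insertion order, which need not match
    whatever tie-break the real pipeline applies.
    """
    return Counter(run[dimension] for run in runs).most_common(1)[0][0]

for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {majority(raw, dim)}")
```

Note that `emotion` is tied in these runs (indifference 2, resignation 2, approval 1), yet the table records "resignation", so the pipeline must break ties by some rule other than plain frequency; which rule is not recoverable from this record.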