Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Sorry, You Don't Actually Know the Pain is Real

That's completely consistent with OP's post. OP might personally believe the pain is real (seems like they do; I don't), but they didn't argue that. They just argued, reasonably, that we can't have certainty that the observable emotional pain is fake. That is reasonable. We do not understand human consciousness well enough to intentionally replicate it perfectly (which doesn't mean we can't "luck" out when explicitly building something modelled off part of how we think the brain works), and we don't understand human consciousness well enough to assert that an LLM bears no similarity to it.

As a hypothetical, it is a possibility that in some part of how our brains function there is something analogous to a prediction engine for concepts which our consciousness derives from, and it is also possible that a classical computing prediction engine which is powerful enough can achieve a similar end result *in the ways that matter*. I'm not claiming that and I don't believe that, but the certainty with which people say "It can't feel pain because it's [XYZ thing that we built]" is unfounded.
Source: reddit · AI Moral Status · Unix timestamp 1676628221.0 · ♥ 19
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_j914woe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_j8wt0sj","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"rdc_j8v0w3f","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"rdc_j8vzo3j","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"rdc_j8w3ud4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}]