Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m with you. From the outside, LLMs are _effectively_ conscious, as in basically indistinguishable in Turing tests if properly set up. I’d even argue that discussing with 90% of human redditors would feel more mechanical than talking to a LLM, due to how rigid their opinions are. Looking at an LLM’s inner workings makes sense, of course, but human brains also work wildly different. I for example don’t have an inner monologue. People who do would surely consider my mind to be completely alien—and vice versa. It’s a philosophical dilemma of no consequence.
Source: reddit · AI Moral Status · 1739927110.0 · ♥ 12
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mdj90ls", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mdjfk24", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdjn2zl", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdjvktv", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mdj9r4l", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
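The raw response is a JSON array of per-run codings, one object per run. A minimal sketch of how such a response could be parsed and tallied per dimension is below; the `tally` helper is hypothetical and not part of the coding tool itself.

```python
import json
from collections import Counter

# Raw LLM response copied from above: five coding runs for the same comment.
raw = '''[
  {"id": "rdc_mdj90ls", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mdjfk24", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdjn2zl", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdjvktv", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mdj9r4l", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]'''

def tally(records: list[dict], dimension: str) -> Counter:
    """Count how often each label was assigned for one coding dimension."""
    return Counter(r[dimension] for r in records)

records = json.loads(raw)
emotion_counts = tally(records, "emotion")
print(emotion_counts)  # indifference and resignation appear twice each, approval once
```

Note that the runs disagree on the `emotion` dimension while agreeing unanimously on the other three; any aggregation rule (majority vote, first run, tie-breaking) is a design choice of the coding pipeline, not implied by the response format.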