Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And we greatly underestimate animals. How can we tell when something puts meaning behind signs or if they are just mimicking like a parrot? Or just making sounds based on some hardcoded instructions like bird songs? It's often some kind of ratio of capacity to make logical decisions and operating on instinct. Humans also have certain instincts, like to follow the crowds when uncertain of direction to take. The [Philosophical zombie](https://en.wikipedia.org/wiki/Philosophical_zombie) concept comes to my mind. One could say an LLM is literally one, as it imitates speech, but there's no thought (as we understand it) behind its words. But is it necessary, when pattern recognition is enough to use words in the correct context? I also often bring up the [Chinese Room](https://en.wikipedia.org/wiki/Chinese_room), because it's more apt. In the end though, does it even matter? We could debate about this, and people will still choose to believe whatever they want, regardless of how it affects their mental health.
reddit · AI Moral Status · 1739955727.0 · ♥ 13
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mdl3jad","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_mdl9l5o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_mdmybdw","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"rdc_mdo4ici","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},{"id":"rdc_mdo6yx8","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]