Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is their point. The LLM, nor a book, are sentient. It’s just that an LLM can appear to be (by design) and that has people wanting it to be true, especially if they feel like they are important to it. I say this only highlights mental health struggles and the inadequacies of people getting the support they need from actual people. I’m not saying it’s bad someone feels better interacting with an LLM if it helps their mental health, but let’s not over anthropomorphize the tools. We wouldn’t with a book.
Source: reddit · AI Moral Status · 1749755017.0 · ♥ 123
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_mxhlcm0","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mxihokf","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mxfjril","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_mxfru8t","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mxfhzin","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
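The raw response appears to be a JSON array covering a batch of five comments; the coding-result values above match the entry with id `rdc_mxfjril`. A minimal parsing sketch, assuming the response is valid JSON and that `coding_for` is a hypothetical helper rather than part of the actual pipeline:

```python
import json

# The batch response shown above, verbatim: one coding object per comment.
raw_response = '''[
  {"id":"rdc_mxhlcm0","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mxihokf","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mxfjril","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_mxfru8t","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mxfhzin","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coding entry for one comment id (hypothetical helper)."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    raise KeyError(f"no coding found for {comment_id}")

coding = coding_for(raw_response, "rdc_mxfjril")
```

With the data above, `coding["reasoning"]` is `"consequentialist"` and `coding["emotion"]` is `"mixed"`, consistent with the coding-result values shown for this comment.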