Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I get where you are coming from and agree with the gist: LLMs are not thinking or reasoning. They aren’t intelligent. They cannot differentiate between fact and fiction and they don’t even care about the difference. They don’t have spontaneous thoughts; they only generate output to your queries. There’s no underlying curiosity about … well, about anything. But that doesn’t mean that it can’t generate some insightful and useful replies. It’s an illusion, but is a useful illusion.
Source: reddit · AI Moral Status · 1750949233.0 (Unix timestamp) · ♥ 5
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_n00mhqk","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"rdc_mzw01o8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_mzw6999","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_mzx2sgd","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_mzw3xgs","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"} ]