Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issue is, you're making the leap between a probabilistic understanding of reality, and pretending that probabilistic understanding is automatically an objective understanding of reality. In theory the two are very similar - other minds probably experience the world same as I do, they probably have vaguely similar preferences, the observed world is probably real, etc... - but that's a huge difference between thinking those as a set of useful assumptions, and knowing any of them for certain. Even at the most trivial level, there are plenty of subjective experiences that people have radically different reactions to - pain and privation, sexual experiences, what brings life satisfaction, etc. Not only are those experiences different, the way people interpret them is radically different depending on their worldview and understanding.

> Now does any of that prove objective morality? I daresay it doesn't. But by the same token, nothing can prove objective reality either. I'd say that the 10 points above prove an objective morality, or at least a very workable and practical and pragmatic morality, about as well as it is possible to be proven

Here's the crux of the problem - it points to the idea that some vague, general principles can be commonly held. You can make a materialist argument for why people should probably follow the golden rule, for example. But when you actually drill down to specific moral issues, you're no further ahead than when you started. There are still plenty of moral arguments you can make, starting with the exact same starting assumptions, and come to radically different conclusions. The consequences of the assumptions you're making here are that you are in the position of a radically subjective morality, which is virtually powerless to make any prescriptive judgements on anyone's behaviour beyond the most pointlessly destructive types.
reddit AI Moral Status 1415034364.0 ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_n8jknk3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_n8j76rx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_n8jdfel","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_clrt2bh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_clsif6k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
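Because the raw response is a JSON array keyed by comment id, the coded dimensions shown in the table can be recovered by parsing the array and selecting the matching record. A minimal sketch in Python: the helper name `codes_for` is hypothetical, but the field names (`responsibility`, `reasoning`, `policy`, `emotion`) come directly from the response above.

```python
import json

# The raw batch response shown above: one JSON object per coded comment.
RAW_RESPONSE = '''[
  {"id":"rdc_n8jknk3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_n8j76rx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_n8jdfel","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_clrt2bh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_clsif6k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

def codes_for(comment_id: str, raw: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((r for r in json.loads(raw) if r["id"] == comment_id), None)

# The record for rdc_clrt2bh matches the table: reasoning and emotion "mixed".
print(codes_for("rdc_clrt2bh", RAW_RESPONSE))
```

Looking up an id not present in the batch returns `None`, which is a convenient signal that the model skipped or dropped a comment.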