Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a philosophical question. There’s no evidence that how humans “think” is different from how LLMs do. We don’t actually know what thinking is. But if we look at our model of the brain and its neurons, AI is very similar with its artificial neural networks. We know humans have memory, take in new information, and process it based on what they’ve seen before. That’s exactly what LLMs do. So theoretically we have no basis to say for sure that LLMs don’t think like humans. Empirically, LLMs have passed the Turing test and are outputting a great deal of content indistinguishable from a human’s. So again, there’s no basis for drawing that conclusion empirically. You assume humans are capable of independent thought and don’t just do probabilistic analysis. Again, there’s no evidence that is true. You say the sky is blue because someone fed you that information when you were little. From there you classify other blue things based on probability distributions over the signals reaching your eye. Humans train on the vast amounts of information we consume, and every innovation has a precursor or builds on top of something else. We use analogies and a priori probability to reason about new things. Philosophers have debated whether we have free will at all.
reddit AI Moral Status 1750978483.0 ♥ 2
Coding Result
Responsibility: none
Reasoning: mixed
Policy: none
Emotion: indifference
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_mzyehu7","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_mzysgni","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_mzyw68u","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_n006j61","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_n01x1s8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]