Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Machine learning, neural networks, and complex systems are intended to fool humans into thinking they interact with a conscious entity, but that's just an illusion. Some people think that it will appear "by itself" as computing power grows, but the emergence of consciousness is a very complex process which requires more than the multiplication of supercomputers, and I don't believe we have discovered anything close to human thinking in terms of AI. Until then, computers are just machines which perform what we program them to do, and nothing more. You can program a computer to say that it is conscious, to give you that illusion, but it doesn't have anything close to consciousness and it doesn't "think" by itself. John Searle's "Chinese room" argument illustrates this point clearly.
reddit AI Moral Status 1518451484.0 ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_du4rapj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_du4syjf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_du5jhx1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_du46h2b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_du45k3r","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
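The raw response is a JSON array, one object per coded comment. A minimal sketch of how such a response might be parsed, assuming the field names shown above (the single-entry string here is abbreviated from the full response for illustration):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes,
# using the field names seen in the response above.
raw = (
    '[{"id":"rdc_du4rapj","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"}]'
)

codes = json.loads(raw)
for entry in codes:
    # Each entry carries the comment id plus the four coded dimensions.
    print(entry["id"], entry["responsibility"], entry["reasoning"],
          entry["policy"], entry["emotion"])
```

Parsing into plain dicts like this makes it straightforward to cross-check each entry against the rendered Coding Result table above.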