Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the biggest gaping hole in the concept of the Chinese Room Experiment is that there is some "gotcha" achieved by saying that the man performing the role of processing the analog programming "doesn't know a word of Chinese" and therefore... what? That means the program producing Chinese answers to Chinese input is functionally different from a mind? An individual neuron doesn't "know" anything either. Your brain cells don't know what words are or what food tastes like, or anything else; all they do is convey nerve impulses in a complex pattern that produces behavior, one of those behaviors being consciousness.

Let's say it is possible to boil down all the functions of a single neuron into a list of simple instructions. I don't think it is a stretch to say that nothing magical goes on at the cellular level, and that with enough time, all of these functions could be performed by a single person (receiving a message, sending a message, etc.). Then we get 86 billion other people together in one giant room, each of them performing the role of one neuron. Assuming you had access to a snapshot of an individual person's brain, a perfect map of every existing neural connection, you could assign those connections to your neuron simulators. This is the ultimate extrapolation of the Chinese Room Experiment, and it demonstrates that the scenario described is not functionally removed from the reality of conscious thought: it is inherently built on the actions of unknowing, unthinking constituent parts.

I'm not trying to argue that a natural language learning algorithm is sentient. Natural, organic neural networks have, at the very least, several structures like feedback loops that allow for creative thought. Consciousness, sentience, whatever you want to call it, probably does not require language processing to exist, just as it doesn't require image processing, so it stands to reason that language processing can exist on its own without a thinking mind.
reddit AI Governance 1676270994.0 ♥ 8
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j8bc1ta", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8awh01", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8b0aw1", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_j8ce8ur", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8b6oti", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
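As a minimal sketch of how a raw response like this can be checked and indexed: the snippet below parses the JSON array verbatim from the response above and looks up one record by its `id`. The record `rdc_j8ce8ur` is the one whose values match the coding table for this comment; the variable names are illustrative, not part of any pipeline described here.

```python
import json

# Raw model output, copied verbatim from the response log above.
raw = (
    '[ {"id":"rdc_j8bc1ta","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_j8awh01","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"approval"},'
    ' {"id":"rdc_j8b0aw1","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    ' {"id":"rdc_j8ce8ur","responsibility":"none","reasoning":"deontological",'
    '"policy":"none","emotion":"mixed"},'
    ' {"id":"rdc_j8b6oti","responsibility":"user","reasoning":"deontological",'
    '"policy":"none","emotion":"indifference"} ]'
)

# Index the coded records by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# The entry matching the coding table shown for this comment.
entry = records["rdc_j8ce8ur"]
print(entry["reasoning"], entry["emotion"])  # deontological mixed
```

Indexing by `id` also makes it easy to verify that every record in the batch carries the four expected coding dimensions before it is stored.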