Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Poor lex doesn't even comprehend the negative psychological and behavioral impac…" (ytc_Ugz8ICTkL…)
- "Why didn't the son talk to his parents? Why didn't he feel like he could talk to…" (ytc_Ugx9r3zfU…)
- "If consciousness is exclusively tied to quantum microtubule processes in the hum…" (ytr_UgwAd2Y6C…)
- "@travisgoesthere 'You likely dont even understand how AI works and' Quote one o…" (ytr_UgyPKCGgS…)
- "Its Funny how you get Scared of Chatbots These aint chatbots,These are pre Recor…" (ytr_UgzAJ8Mzw…)
- "Debunking the Myth of \"Jailbroken\" AI Models / Jailbreak / Hey Reddit, I've see…" (ytc_UgxC4BhaI…)
- "Well, all you have to do is continue to tell AI where it messed up and it will e…" (ytc_UgyqlK_4w…)
- "Well yeah they only have to make the perfect AI once and boom 90-99% of the curr…" (ytc_UgydSQRKC…)
Comment
I think the biggest gaping hole in the concept of the Chinese Room Experiment is that there is some "gotcha" achieved by saying that the man performing the role of processing the analog programming "doesn't know a word of Chinese" and therefore... What? That means the programming that is producing Chinese answers to Chinese input is functionally different from a mind? An individual neuron doesn't "know" anything either. Your braincells don't know what words are or what food tastes like, or anything else, all they do is convey nerve impulses in a complex pattern that produces behavior, one of those behaviors being consciousness.
Let's say it is possible to boil down all the functions of a single neuron into a list of simple instructions. I don't think it is a stretch to say that nothing magical goes on at a cellular level, and that with enough time, all of these functions could be performed by a single person (receiving a message, sending a message, etc.). Then, we get 86 billion other people together in one giant room, each of them performing the role of one neuron. Assuming you had access to a snapshot of an individual person's brain, a perfect map of every existing neural connection, you could assign those connections to your neuron simulators. This is the ultimate extrapolation of the Chinese Room Experiment, and it demonstrates that the scenario described is not functionally removed from the reality of conscious thought -- that it is inherently built on the actions of unknowing, unthinking constituent parts.
I'm not trying to argue that a natural language learning algorithm is sentient. Natural, organic neural networks have, at the very least, several structures like feedback loops that allow for creative thought. Consciousness, sentience, whatever you want to call it, probably does not require language processing to exist, just as it doesn't require image processing, so it stands to reason that language processing can exist on its own without a thinking mind…
Source: reddit · AI Governance · 1676270994.0 (2023-02-13 UTC) · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_j8bc1ta","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_j8awh01","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_j8b0aw1","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_j8ce8ur","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"rdc_j8b6oti","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
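A response like the one above can be parsed into per-comment coding records before it is written to the database. The sketch below is a minimal, hypothetical example: the key set is taken from the sample output shown here, and the real codebook likely allows more values than appear in these five records.

```python
import json

# Example raw model output, in the same shape as the response above.
raw = '''[
 {"id":"rdc_j8bc1ta","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_j8ce8ur","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]'''

# Keys every coding record must carry (as seen in the sample output).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM coding response and check each record's shape."""
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
    # Index by comment ID so a single coding can be looked up directly.
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(codings["rdc_j8ce8ur"]["reasoning"])  # -> deontological
```

Indexing by `id` mirrors the "Look up by comment ID" workflow: the dimensions shown in the Coding Result table are exactly the fields of the matching record.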