Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "That is a highschooler tho he gets a pass, how much real life work experience ac…" (ytr_UgxEg5F5L…)
- "I called my old dental place. AI made an appointment for me but a human had to c…" (ytc_Ugz-T1R46…)
- "Your constant cuts make me question the editing... You clipped him 3 times in 5 …" (ytc_UgyUeev8g…)
- "In another 10 yrs. they’ll be very difficult to tell from real humans, at least …" (ytc_UgyULTnEz…)
- "theredditexperience1 Yes, there are proportional differences in history. The mor…" (ytr_UgzF8RNVs…)
- "womp womp L bozo, learn how to actually draw instead of pouring out garbage ai.…" (ytr_UgxwdiTos…)
- "AI is a useful tool, but it's NO substitute for a man that knows what he's doing…" (ytc_UgyXNQsjb…)
- "I'm really conflicted by this. On the one hand, AI has given me access to some o…" (ytc_Ugz9rLux_…)
Comment
Searle's Chinese Room thought experiment was a dualist, essentialist attempt to disprove the possibility of constructing a mind from material stuff. The slowness of the process in Searle's room seems meant to discourage us from thinking the room can actually understand Chinese.
I think the operator does not know Chinese, but the system does: the room as a whole understands and speaks Chinese.
Searle’s Chinese Room is a sleight of hand: he smuggles in the assumption that "understanding" must be something separate from symbol manipulation, while failing to explain why a system as a whole couldn't understand just because its parts don’t.
Searle assumes there is a special non-computable property called "understanding," but modern neuroscience suggests that cognition emerges from structured computation. Understanding isn’t a magic spark; it’s the outcome of recursive, predictive, and integrative processes in the brain.
If Searle is right, then you don’t understand English either, since your neurons are just following electrochemical rules without "knowing" what they're doing. His argument, if valid, would refute all cognition, including his own!
I'm more drawn to Daniel Dennett's kind of philosophy ("Consciousness Explained" is a great book).
Recent work in neuroscience is much more interesting than Searle's intuitions in this respect. For instance, Stanislas Dehaene’s work on consciousness as global information sharing directly contradicts Searle’s intuition pump. The brain doesn’t have an inner interpreter or homunculus; it works by distributed computation, which is precisely what AI could achieve too.
Animals are in the same situation: in biology there is always a spectrum, and no property comes abruptly out of nowhere.
So there is a continuum of self-consciousness (which might itself be an illusion in the first place, including in us), a continuum of sentience, and a continuum of experience.
Not everything appeared with us. Intelligence
reddit · AI Moral Status · posted 2025-02-19 (1739959366.0) · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_mdl3jad", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mdl9l5o", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mdmybdw", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mdo4ici", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mdo6yx8", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
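The raw LLM response is a JSON array of per-comment records, each carrying the four coded dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and a record looked up by comment ID — the `lookup` helper is a hypothetical name, not part of the tool; the field names and sample data are taken verbatim from this page:

```python
import json

# Raw LLM response copied from this page (a JSON array of coded records).
RAW = (
    '[{"id":"rdc_mdl3jad","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_mdl9l5o","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_mdmybdw","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"rdc_mdo4ici","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_mdo6yx8","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

def lookup(raw: str, comment_id: str):
    """Parse the raw response and return the record for comment_id, or None."""
    return next((r for r in json.loads(raw) if r["id"] == comment_id), None)

record = lookup(RAW, "rdc_mdo4ici")
print(record["emotion"])  # -> resignation
```

Each record maps one-to-one onto a "Coding Result" table like the one above, so the same parse step could render the per-comment dimension table directly.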