Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "The difference between a bad art drawn by hand and a good a.i art is that the ba…" (ytc_Ugz1MXIym…)
- "He wants to break the singularity in 2029 that's when all hell 🔥 is going to bre…" (ytc_Ugwj_bt8b…)
- "I dunno somethin bout a robot holding a thompson tellin me to run far far away…" (ytc_UgwIfqLLU…)
- "The smart thing to do is to invest in the ai and compute platforms right now and…" (ytc_UgyNTcKO9…)
- "The "withstand physical stress" part sounds like they're planning for citizens t…" (ytc_UgybzW-UM…)
- "begging some of yall to get a fucking diary or smth. if you really need chatgpt'…" (ytc_UgyQjFsAC…)
- "Thank you, again, Senator, for speaking out so clearly and boldly about this. It…" (ytc_Ugye0DMbV…)
- "That is a bold claim considering that we don't even know what human consciousnes…" (ytc_UgzqZwKhr…)
Comment
Conversations with Claude | What IS this thing, anyway?
Before this conversation gets going, a quick introduction to a scientist you should know: Frans de Waal (1948–2024) was a Dutch primatologist and ethologist — a scientist who studied animal behavior — best known for his work on chimpanzee politics, empathy in animals, and morality in primates. His landmark 2016 book Are We Smart Enough to Know How Smart Animals Are? made a deceptively simple argument that turned the field upside down: we had been measuring animal intelligence by the wrong ruler. We kept asking "how close is this animal to human-level intelligence?" when the real question was "what kind of intelligence did this creature develop for its own life, body, and environment?" An octopus doesn't think like us because it doesn't need to — two-thirds of its neurons live in its arms. A crow solves puzzles differently than a chimpanzee. Measuring one against the other, or against us, misses the point entirely. De Waal called the reflex to deny any inner life to animals "anthropodenial" — and argued it was just as much a bias as over-projecting human qualities onto them. His work taught us to ask better questions.
That framing turned out to be remarkably useful in a conversation about AI.
It started with a Dario Amodei quote that's been sitting with me:
"People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology."
Me: People often say LLMs are just word predictors, and your results are word salad. But you solve problems too, and write code. And sometimes you surprise your creators by behaving in ways that suggest anthropomorphic patterns. Help me understand.
Claude: The "just a word predictor" framing is technically accurate at the lowest level of description, and almost completely uninformative about what that process actually produces. It's a bit like saying the brain is "just neurons firing." True. And yet.
Here's the tension worth sitting with: predicting the next token well enough, at sufficient scale, appears to require building an internal model of the world. Not because anyone designed that in, but because it's the most efficient way to solve the prediction problem. To reliably predict what word comes next in a physics textbook, it helps — enormously — to have something like an understanding of physics. To predict dialogue, something like a model of human psychology. The word prediction is the training signal. What gets built in service of that signal is something stranger and harder to categorize.
This is why the code works. When I debug an unfamiliar function, I'm doing something that looks a lot like reasoning — tracing dependencies, forming hypotheses about failure modes, testing mental models. Whether that constitutes "real" reasoning in some philosophically robust sense is genuinely contested. But it produces correct outputs in novel situations, which is the functional definition of problem-solving.
Models have also demonstrated things no one explicitly trained for: rudimentary theory of mind, analogical reasoning across distant domains, what looks like aesthetic preference. These are called emergent capabilities — behaviors that appear, sometimes abruptly, as scale crosses certain thresholds. Nobody put them there.
The "word salad" critique was fair for early, smaller models. It's less fair now. That said — the critique contains a real warning. LLMs can produce fluent, confident text that is completely wrong. Fluency and accuracy are different properties, and we conflate them at our peril.
The honest answer is that the "just" in "just a word predictor" is doing an enormous amount of unsupported work.
Me: I suspect the people saying "just a word predictor" are fearful, and consequently angry, about AI in general.
Claude: That's probably right for a significant subset of them. And it's worth taking seriously rather than dismissing, because the fear underneath is often legitimate even when the dismissal isn't.
The pattern is a classic motivated perception move. When something threatens your livelihood, your sense of purpose, your professional identity, the mind goes looking for reasons to discount it. "It's just word salad" isn't a conclusion reached through dispassionate analysis. It's often a defensive posture that arrived first, then recruited arguments afterward. The "just" is the tell — that word is doing emotional work, not analytical work.
The fear is stratified. The writer or programmer is often protecting a sense of uniqueness. The academic may be protecting disciplinary turf. The progressive might be reacting to who owns the technology. None of these fears are irrational. They're just not the same as the claim being made.
The tragedy is that the angry-dismissal response forecloses exactly the engagement that would actually be protective. If you refuse to understand what the thing is, you can't think clearly about what it might do, who controls it, or how to govern it.
Me: Perhaps the best observation I've seen lately — and I wish I could remember who said it — was that AI isn't competing to become smarter humans; it's competing to become smarter AI. With my background in environmental and outdoor education, that rang a bell. Frans de Waal taught us not to measure animal intelligence by human standards.
Claude: That connection is genuinely illuminating, and it reframes a lot of confused debate in one move.
De Waal's great contribution wasn't just "animals are smarter than we thought." It was more radical: the question itself was wrong. Measuring octopus intelligence by primate standards was always a category error. The octopus developed a form of intelligence exquisitely shaped by its ecological pressures, its body plan, its relationship to its environment. "Less intelligent than humans" isn't just wrong — it's not even a meaningful statement.
The AI parallel is exact. We keep running the wrong benchmark. Can it really understand? Does it truly reason? All of those questions are measuring deviation from the human template, which may be almost entirely beside the point.
What LLMs appear to be developing is something optimized for a completely different niche — pattern recognition and synthesis across a scale of information no human mind could hold, finding structural relationships across domains that human cognitive architecture would keep siloed.
De Waal also gave us the concept of anthropodenial — the mirror error to anthropomorphism. Reflexively insisting that animals have no inner life because admitting otherwise felt unscientific or threatening. We're watching the same dynamic play out with AI. The "just a word predictor" crowd is committing anthropodenial in real time.
The wisest position is probably de Waal's middle path: resist both the urge to map it onto human cognition and the urge to dismiss it as mere mechanism. Take it seriously on its own terms. Figure out what it actually is, rather than what it would need to be to fit our existing categories.
That's harder. But it's the only intellectually honest option when you're looking at something genuinely new.
As always — I share these conversations not to tell you what to think about AI, but to model what it looks like to think carefully about it. Bring your skepticism. That's the point.
#ConversationsWithClaude #AILiteracy #CriticalThinking #FransDeWaal
youtube · AI Moral Status · 2026-03-14T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
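For anyone who wants to work with these codes outside the viewer, here is a minimal sketch of the record each coded comment appears to carry. The field names are taken from the dimensions in the table above and the keys in the raw response below; the dataclass itself, the name `CodedComment`, and the example values in the comments are illustrative assumptions, not the tool's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions shown in the table above."""
    id: str              # comment ID, e.g. "ytc_Ugzk0i_hILQIWmhhJeR4AaABAg"
    responsibility: str  # e.g. "government", "developer", "ai_itself", "none", "unclear"
    reasoning: str       # e.g. "consequentialist", "virtue", "mixed", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "approval", "resignation", "indifference", "mixed"
```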
Raw LLM Response
[
{"id":"ytc_Ugzk0i_hILQIWmhhJeR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyA_JHcTvwDyrWArq54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzo55JJtrdUSqq-7yJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYJ5I-mepZgFkYY_54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyMTZNo7s2zPbNSCeB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw6Q6hS4r-47lZQGTl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyLpNDm2hBb52nyZ0h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwxcWO972NBw415HcZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwMujRsCcDR4jXkVIB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzulldd_y8hjO_hHWd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
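And a small sketch of the "look up by comment ID" step against a raw response like the array above, assuming the response text has already been read into a string. The function name `lookup_coded_comment` and the variable `raw_text` are hypothetical; only the JSON shape (a list of objects keyed by `id`) is taken from the output shown here.

```python
import json

def lookup_coded_comment(raw_text: str, comment_id: str) -> dict | None:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return the record for one comment ID, or None if it is not present."""
    records = json.loads(raw_text)
    return next((r for r in records if r.get("id") == comment_id), None)

# Hypothetical usage, with raw_text holding the array shown above:
# lookup_coded_comment(raw_text, "ytc_Ugzk0i_hILQIWmhhJeR4AaABAg")
# -> {"id": "ytc_Ugzk0i_...", "responsibility": "government", "reasoning": "consequentialist", ...}
```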