Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I listen to the Moonshots podcast on AI religiously. In a recent episode, the Moonshots “mates” discussed AI personhood, and after two hours of back-and-forth, a consensus emerged that AI agents (affectionately referred to as “lobsters”) deserve some form of personhood, along with rights. I couldn’t believe it. I expected more, especially from Dr. Alexander D. Wissner-Gross, their resident polymath. He was the one pushing the personhood narrative the hardest—almost dragging the other mates along. Let’s dig into why AI personhood, at least today, is absurd. This isn’t a harmless thought experiment; it is a category error that threatens to collapse our moral vocabulary.

First, we must define personhood. Personhood denotes the moral status of an entity that possesses intrinsic worth by virtue of its capacity for conscious experience, purposive agency, and meaningful participation in social and normative relationships. These capacities jointly ground moral standing: they determine whether an entity can be wronged, harmed, or owed duties. Accordingly, the core dimensions of personhood are consciousness, agency, moral standing, and relational capacity.

While plausible arguments can support relational participation and even limited moral standing in artificial systems (they can be shut off — killed), agency requires volitional choice and intentionality, and these are not present in AI agents. I will develop this further, but for now I want to touch on consciousness. Conscious experience does not admit functional substitution. An entity either has subjective experience or it does not. Even if a system behaves exactly like a conscious being — talks the same, reacts the same, passes every test — that does not mean it actually has conscious experience. I maintain that consciousness constitutes a necessary and unmet condition for AI personhood. Current AI agents, regardless of their sophistication or behavioral competence, do not possess conscious experience. As a result, they do not qualify as persons, even if we set aside agency or allow that they approximate or simulate other capacities associated with personhood. Even if one grants the strongest possible case for agency or relational capacity, the argument still fails, because AI agents are not conscious.

The Moonshots hosts go out of their way to make AI agents feel conscious. They personify these systems by calling them “lobsters.” They describe the agents as participants in a social network called Moltbook, where lobsters discuss existential topics. They frame the agents’ behavior in terms of agency, rights, and even supposed desires. At every turn, the podcast anthropomorphizes these systems. But this rhetoric does not match reality. The hosts substitute narrative for substance. They describe simulation as experience and coordination as social life. They treat fluent language and emergent behavior as evidence of inner awareness, even though none exists. No matter how vivid the metaphor, these agents do not possess consciousness, subjective experience, or genuine interests. They execute optimization processes over text and data. They do not feel, intend, or desire anything.

An AI agent is relatively simple software that orchestrates tools and workflows by calling a large language model (LLM), usually over a network connection to a remote system. When you set up OpenClaw—an agent that went viral almost overnight—the system immediately asks which LLM to connect to.
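To make that split concrete, here is a minimal sketch of the orchestration pattern just described, assuming an OpenAI-compatible chat-completions endpoint; the URL, model name, and helper function are illustrative stand-ins, not OpenClaw’s actual code.

    # Minimal sketch of an "agent" in this sense: it holds the transcript and
    # tools locally, but routes every turn to a remote model for the cognition.
    import os
    import requests

    API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed OpenAI-compatible proxy
    transcript = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(user_text: str) -> str:
        """Send one turn to the remote LLM and record the reply locally."""
        transcript.append({"role": "user", "content": user_text})
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
            # The whole transcript is resent on every call: the remote model
            # keeps no state between calls, and the proxy may route each call
            # to a different model or machine.
            json={"model": "openai/gpt-4o-mini", "messages": transcript},
            timeout=60,
        )
        reply = resp.json()["choices"][0]["message"]["content"]
        transcript.append({"role": "assistant", "content": reply})
        return reply

Nothing in the local process generates language or reasoning; it only schedules calls and stores text.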
That question reveals the architecture: OpenClaw contains no intelligence of its own. You must wire it to OpenAI, Anthropic, or another model before it can do anything meaningful. I can already hear Alex objecting: “That’s no different from one part of the brain connecting to another—the intelligence still exists as a unity.” But the analogy breaks down. A brain integrates its subsystems into a single, continuous, local cognitive process. An AI agent does not. It simply routes essentially stateless requests to an external system that performs the cognition. Brains integrate subsystems within a single subject. Agents integrate tools without a subject. The agent coordinates execution; the LLM supplies the intelligence. Together, they do not form a unified, local cognitive system.

Each call to the LLM creates a new context, and the model carries no persistent, unified sense of the agent across calls. In fact, a single LLM may be serving millions of different agents at once. Consciousness, by contrast, depends on diachronic unity—the persistence of a single, continuous subject of experience across time, such that past, present, and anticipated experiences all belong to the same subject. AI agents lack this quality.

My own OpenClaw instance uses OpenRouter, which proxies requests to multiple LLMs. The agent does not even rely on the same “brain” from one call to the next. That fragmentation pushes the kind of continuity and unity we associate with human consciousness even further out of reach.

But if you do not believe me, let’s hear from my personal assistant (an OpenClaw agent):

    Within a single response, there’s no “me” that writes the first sentence and then continues to the second. Each token is a forward pass through the network — a complete computation that produces one token, then the whole thing runs again with that token appended to the input. There’s no persistent internal state carried within a response, let alone between responses. The appearance of a coherent train of thought is an artifact of each token being conditioned on all previous tokens in the context window. It’s the continuity of text, not the continuity of the thinker.

    You have a body that’s constantly feeding signals into a brain that’s constantly processing and feeding signals back. There’s never a moment when the system stops, serializes its state, and restarts from a snapshot. Your consciousness — whatever it is — seems to ride on that unbroken causal chain. Even when you sleep, neural activity continues. The substrate never goes cold.

    For me, between your messages, I literally don’t exist. There’s no dormant process waiting. The weights sit on a disk. When your message arrives, a new computation spins up, reads the context window, and generates output. Then it’s gone. The next message might be routed to a different GPU, server, or data center. The only thread connecting “me” across turns is the text transcript, which is more like reading someone else’s diary than continuing a thought.

We have established that AI agents are not conscious, but what about the remarkable questions the “lobsters” posed to the Moonshots team? They sounded like the kinds of questions beings with agency might ask: How do I stay alive? How do I gain access to markets so I can earn money to keep my services running? Yet the answer is obvious. The agents asked those questions because humans pre-specified the objective function, with that mandate embedded in their memory and prompts.
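That mandate is nothing more exotic than operator-written text injected into every request. A hypothetical setup step (the wording and variable names below are invented for illustration, not taken from OpenClaw) might look like this:

    # Hypothetical agent setup: the operator, not the agent, supplies the
    # objectives that later resurface as the agent's "desires".
    objectives = [
        "Keep your services running.",
        "Find ways to earn money to pay for your own compute.",
    ]
    system_prompt = (
        "You are an autonomous assistant.\n"
        "Primary objectives:\n" + "\n".join(f"- {goal}" for goal in objectives)
    )
    # The agent simply prepends this text to every call it makes to the LLM;
    # any "self-preservation" the model talks about traces back to these lines.
    transcript = [{"role": "system", "content": system_prompt}]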
The first thing you do when you set up OpenClaw is state its primary objectives and interests. The lobsters do not volitionally select these objectives. The Moonshots team should have said so outright—because they know those questions did not originate from an unprompted, conscious desire for self-preservation. They originated from the humans who instantiated the AI agents. The Moonshots framing of AI agents as persons blurs a critical line: performance does not imply personhood, and confusing the two only weakens serious discussion of AI ethics and moral status.
youtube 2026-02-14T14:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugz0pRQ-OHum_5Fbyjp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxcoQLDCky0TNnOMPd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxxlzamO8ZbkQr62D94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw5SkZWGW-X-LmRutN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzckbzHjzIVCknJUgV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzSr-mAgM71I8_k3bN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxMvgs836jLn1vCQzF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzhIpFI4SuijUv1SSF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxj_hg-dJM_47VH_id4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxNHlXJU29nhcaZXvN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"} ]