Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Alright Hank, I love ya, but I don't know how much of this I can watch. I'm only…" (ytc_Ugy0JaoEx…)
- "It must be such an easy life to be an AI ‘Artist’ and get paid for doing LITERAL…" (ytc_Ugy3T0Ivf…)
- "finally someone mentioned how most of the things we call AI is just LLMs. Also I…" (ytc_Ugw2NIUrM…)
- "Oh it's definitely theft. You're not making new art youre stealing IP. Literally…" (ytc_Ugwyv8yfh…)
- "haha...is the ai acting different? or have trained it to act different when we t…" (ytc_UgzJAO6Gf…)
- "We don't directly steal or replicate. We choose, get inspired, and make it withi…" (ytr_Ugz3eqV1R…)
- "You’d better listen to Lecun rather than Musk or Zukerberg that roughly know not…" (ytc_Ugya6jtGw…)
- "I'm sorry but I can't stand this argument. AI is unethical and terrible even if …" (ytc_Ugxk0BZr6…)
Comment
On “near-universal consensus”—I think you’re reading the claim differently than it’s intended. The essay isn’t saying all humans *are treated as* full agents in practice. Obviously they aren’t; that’s the history of slavery, disenfranchisement, and every other form of political exclusion. The claim is that in contemporary moral philosophy, the position that all cognitively typical adult humans possess agency is about as close to consensus as the field gets. That’s a claim about the state of the discourse, not about how societies actually behave. The gap between those two things is real and important, but it doesn’t make the philosophical consensus less real.
On the Semiotic Problem—the essay’s examples (robot, sparkle, Shoggoth) aren’t meant as an exhaustive taxonomy of AI representation. They’re illustrations of a structural claim: that our dominant representations of AI encode assumptions about moral status that foreclose inquiry before it begins. You’re right that the distinction between hard-coded robots and self-modifying code matters, and that not all machines pose the same questions about agency. But that’s consistent with the essay’s argument rather than contrary to it—the Semiotic Problem is precisely the claim that collapsing these distinctions (as the “robot” image does) prevents us from asking the right questions.
Your point on intellectual property is interesting and something I haven’t addressed. The legal reality that AI systems are property does sit in tension with any framework that might accord them moral status, and that tension would need serious treatment in future work. On the LLM point—I used Claude as a drafting collaborator, which I don’t think is a faux pas so much as a disclosure worth making. The arguments are mine; the drafting process involved iteration with an LLM. I don’t think that undermines the substance, but I understand why it gives some readers pause.
| Field | Value |
|---|---|
| Source | reddit |
| Title | AI Moral Status |
| Posted | 2026-04-04 (Unix 1775283696) |
| Score | ♥ 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_oe4apgm", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe1c25i", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe7mbdf", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe7rqc3", "responsibility": "unclear", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_oe1ivlw", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
```
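The raw response is a JSON array of coding records, one per comment, with the same four dimensions shown in the Coding Result table. A minimal sketch of parsing such a payload into per-comment dimension maps — the function name `parse_codings` and the fallback value `"unclear"` are illustrative assumptions, not part of the tool:

```python
import json

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(payload: str) -> dict:
    """Map each record's id to its coded dimensions.

    Unknown or missing dimensions fall back to "unclear"
    (an assumed default, mirroring the table's convention).
    """
    records = json.loads(payload)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

# One record from the raw response above, as sample input.
raw = (
    '[{"id":"rdc_oe4apgm","responsibility":"unclear",'
    '"reasoning":"deontological","policy":"unclear",'
    '"emotion":"indifference"}]'
)

codings = parse_codings(raw)
print(codings["rdc_oe4apgm"]["reasoning"])  # deontological
```

This keeps the lookup-by-ID pattern the page itself uses: given a comment ID, the coded dimensions are one dictionary access away.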