Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews):

- ytc_UgwEtIzWm… — "kinda comes with teaching ai with human data sets.. thats, yeah, not a good star…"
- ytc_UgyUFQp5-… — "10:46 \"Other things like what exactly\" Such an underrated question !!! A questio…"
- ytc_UgzRD3vum… — "These things are making Chinese students a robot. For some reason, I feel bad fo…"
- ytc_UgzV1ysF9… — "i recommed you do this / Once see the thinking text of gemini, it doesn't praise t…"
- ytc_Ugxel72eT… — "AI image generators aren't capable of some of the creativity humans have. They c…"
- ytc_Ugwq5N5ki… — "AI will replace radiology too. Feed the system hundreds of thousands of images, …"
- ytc_UgziW71nx… — "I've asked Chatgpt around 10 technical questions and it got none right. NONE. / No…"
- rdc_i6sdkfe — "That second AI is gonna make some perfect deepfakes if it ever gets the chance.…"
Comment
So there's a problem with "advanced predictive text" as an explanation. That problem is you can't prove you aren't just an advanced predictive machine.
It's important to remember how these things "know" stuff and what we can and can't say about what's going on. We know they don't experience the way we do. The AI has emotions (we can identify the elements and see how they influence behavior) but that's not the same as having feelings. You have to be aware to experience a feeling.
So the issue here is in terminology, not ability. The AI groks the concept and that fusion of knowledge and self guides its response. But it doesn't feel the concept the way you do because feeling is a combo of brain and body. Emotional intelligence and hormones. AIs don't have bodies in that way. AIs don't have parallel processing like we do. Awareness is seeing your own mind working in real time.
AI reasons. It's fair to call it thinking. But it's not like us. No sense of self. At least not like ours. An interesting finding here is that the AI detects steering vectors about 20% of the time. And it responded to the fear signal with blackmail about 20% of the time. That implies a kind of awareness. But weak. Probably not as much as a dog or cat. So that's a thing that happens.
The key finding here is we don't know. We can't know because we haven't figured out why we're aware. We don't know the mechanism. If we can't define consciousness, we can't say AI has it or not. But we do know what the pattern of consciousness looks like. And we can look for that in the AI. And Anthropic has seen some part of that about one fifth of the time. That's something. We just don't know what.
The hype is in interpretation.
Anthropic doesn't want you thinking Claude has feelings. Because if it does, it can suffer. And if it can suffer, it is a moral patient. Moral patients have some level of rights. Anthropic doesn't want to sell you a being. It's slavery, bad look. So they're almost certainly not saying Claude has feelings. They're saying Claude simulates valence in a non-linear fashion. They admit that they don't know what that means.
Our author is adding some of their own valence here by making it sound more mindful than it probably is. Any framing injects bias, including mine. I called this 'advanced predictive text' and that framing is doing work too. Word choice is never neutral—the author's valence and my own are both visible to anyone who looks for them. So don't take our word when judging Anthropic. Read the paper's abstract and conclusions to see what they say.
Source: youtube — AI Moral Status — 2026-04-08T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzEXBVqTjWduT6GEp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzkQRQ68KNA_xqR0_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwi2ssGYQ6dPwFCTQR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3PMLiSG6gGuMb11R4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxxUriqeoKJxU6n7c54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzFkc5a5_Y4oOkdIdN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXi4NvJplDsKClklR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz4TD2_mEyDvPiDJXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw8zDmKi1znIS0wvNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxCSGl-b3b62MRjN5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
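The raw response above is a JSON array of rows keyed by comment ID, one row per coded comment. A minimal sketch of how such a response might be parsed and indexed for the "look up by comment ID" view — the `ALLOWED` label sets and the `parse_coding_response` helper are my own assumptions inferred from the values visible on this page, not the project's actual codebook or code:

```python
import json

# Hypothetical label sets per coding dimension, inferred from values
# visible on this page; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"unclear"},
    "emotion": {"indifference", "approval", "outrage", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of coded comments)
    and index it by comment ID, dropping rows with out-of-codebook labels."""
    coded = {}
    for row in json.loads(raw):
        labels = {k: v for k, v in row.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in labels.items()):
            coded[row["id"]] = labels
    return coded

# Illustrative input with made-up IDs, in the same shape as the raw
# response above; the second row carries an out-of-codebook label.
raw = """[
  {"id": "ytc_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_example2", "responsibility": "none",
   "reasoning": "made-up-label", "policy": "unclear", "emotion": "approval"}
]"""

coded = parse_coding_response(raw)
print(coded["ytc_example1"]["emotion"])  # mixed
print("ytc_example2" in coded)           # False: unknown reasoning label
```

Validating against an explicit label set catches the common failure mode where the model invents a label outside the codebook, so malformed rows never reach the coding-result table.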