Raw LLM Responses
Inspect the exact model output for any coded comment. Look it up by its comment ID (a minimal lookup sketch follows), or pick one of the random samples below.
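As a sketch of that lookup over a hypothetical JSONL export (the `raw_llm_responses.jsonl` path and the one-JSON-batch-per-line layout are assumptions, not something the tool documents; well-formed batches are assumed here):

```python
import json

def load_coded_records(path="raw_llm_responses.jsonl"):
    """Index every coded record by comment ID, assuming one JSON batch per line."""
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                for record in json.loads(line):
                    index[record["id"]] = record
    return index

# Look up one coded comment by its ID (an ID taken from the raw response below).
records = load_coded_records()
print(records.get("rdc_mdj5g30"))
```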
Random samples — click to inspect
- "While searching for improving A.I don't forget your human body...the most advanc…" (`ytc_UgzmZFdYl…`)
- "I haven’t seen the entire video but I’ve heard so much about ai agreeing with yo…" (`ytc_Ugxi85BHG…`)
- "Autonomous vehicle accident reporting should follow the example of airline accid…" (`ytc_UgzK6xkue…`)
- "This video made me think: am I part of the problem? I avoid AI images like the p…" (`ytc_Ugwz707PE…`)
- "@gierdziui9003 You know it is stealing and it is wrong. It's different from inve…" (`ytr_UgwBhTXDI…`)
- "Did not think i would enjoy art being created in Front of my eyes (not AI Style)…" (`ytc_UgyZNJrgC…`)
- "The most underappreciated moment in this conversation is when Altman describes a…" (`ytc_UgyFfYgJT…`)
- "The only segment I think is interesting is why people don't also watch car crash…" (`ytc_Ugyjk8hF3…`)
Selected Comment
I'm not going to argue in favour of current LLM consciousness (or the lack thereof actually), but I have a question:
You might infer that I'm conscious based on a mix of my behaviour and some assumptions on your part. I might talk about myself and my subjective experience of consciousness, for example - that's a behaviour. And as you're likely to assume I'm human, you might conclude that I'm therefore conscious, given that as far as we know, humans experience consciousness subjectively, at least to some degree. However, I submit that I could be an AI agent, indistinguishable from a real human, communicating with you over this medium of Reddit.
So far so Turing test, but what if we explicitly detach the assumption about humanity, or more precisely, challenge the assumption that only humans (or biologically embodied animals with similar brains) can be considered to be conscious? Then your claim reduces to a hard claim that LLMs *cannot* be conscious, which is a far higher bar to clear.
If that's what you hold to be true, then what would need to change architecturally for LLMs to remove that constraint?
I don't believe we understand consciousness fully enough to identify how it is architected in the human brain, in detail. We may have some ideas, but it still looks "emergent". LLMs are still currently simpler mechanisms than human brains, so we might have more confidence claiming that AI consciousness is impossible, but until we have a clear non-anthropocentric model of consciousness, it's just a fuzzy conclusion from the other side of the confidence curve.
Rather than ask the simple question of "how do you know I'm conscious" and risk the inevitable rabbit hole that leads to, I'll ask instead: "is current AI more conscious than a dog?". Have we reached ADogGI?
Is human consciousness the only game in town, in other words?
Source: reddit · AI Moral Status · Posted (Unix): 1739924546.0 · ♥ 16
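For readability, the raw Unix timestamp above converts to UTC like so (a quick sketch):

```python
from datetime import datetime, timezone

# Convert the post's Unix timestamp to a readable UTC datetime.
posted = datetime.fromtimestamp(1739924546.0, tz=timezone.utc)
print(posted.isoformat())  # 2025-02-19T00:22:26+00:00
```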
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mdj5g30","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdinz3v","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_mdj0zmu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdjij7m","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdjnfdi","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"})