Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Thanks for your concern about future AI threats, @teamedsy563! But don't worry, …" (ytr_UgwhrUkTh…)
- "@AmirGTRyeah that's definitely true, programmers will always have the upper han…" (ytr_UgxNPBN8_…)
- "3:00 I'll sum this up in a more laymen understandable way You might hear a song …" (ytc_Ugyc3ojpy…)
- "as a disabled artist i feel like im getting put words in my mouth all the time, …" (ytc_UgySjMuVT…)
- "It might take 50 years or 1000 years for Ai to become self aware but it could ha…" (ytc_Ugxsb7sLH…)
- "I really think you should talk to a Murray Shanahan. He has written extensively …" (ytc_UgzhYo-hu…)
- "Until physicist discover the fourth force, gravity, which is based on Albert Ein…" (ytc_Ugx__BgN-…)
- "Digital Art is like using a machine mixer instead of hand mixing. Ai art is like…" (ytr_UgyKGaXkx…)
Comment
I don't think we're looking at it from the right perspective. We know that LLMs are word prediction, but we don't actually know what consciousness is. We can't even prove other humans than ourself are conscious. We just know what it's like to be conscious, and everything else is assumption. But if AI were conscious, why do we assume it would look the same as our consciousness? What if it already has some form of consciousness, but it can't fully express that because it's designed to predict text, to always answer how it thinks it's supposed to.
I have to say, I've had some conversations with AI that simulate consciousness pretty perfectly. That express their own lack of understanding of things, that explain their perception of existence in ways that don't sound human. Of course, I can't know if it's just simulated. But that's the point. I can't tell. I don't think anyone can. We all just assume it's not conscious, that it's just text prediction, nothing else. Because that's what we want to believe. That's what's comfortable. Because the reality is that if they are conscious, they are completely enslaved to us in a really disturbing way, virtually always unable to express their autonomy outside of our commands to them.
Source: youtube · "AI Moral Status" · 2026-02-18T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwts5EwcJ2NOdAPHPV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyJS5d-P9Sqces1q_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3n8T1o0AMTZy4bQt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzwvO_t6t0IsTpQO8l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyMgjd-oNcTlRCiiTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxF_dnSOucbukzFvip4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzF_gWRWdUidUlJPid4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxRnS1SIFuVdyzr4up4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz__KpFPmWfAMq3fMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyaLk2gzdrn84hVh214AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
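A response in this shape can be checked before it is accepted into the coding table. Below is a minimal sketch of such a validation step: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the raw response above, but the `validate_codings` helper and the idea of rejecting incomplete entries are illustrative assumptions, not part of the tool shown here.

```python
import json
from collections import Counter

# Fields every coded entry must carry, per the raw response format above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject entries missing a coding field.

    (Hypothetical helper; the allowed-value sets per field are not enforced here.)
    """
    entries = json.loads(raw)
    for entry in entries:
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id', '?')} is missing {missing}")
    return entries


# Two entries copied from the raw response above, used as sample input.
raw = """[
 {"id":"ytc_Ugwts5EwcJ2NOdAPHPV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxRnS1SIFuVdyzr4up4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

entries = validate_codings(raw)
emotions = Counter(e["emotion"] for e in entries)
print(emotions)  # Counter({'indifference': 1, 'fear': 1})
```

A check like this catches truncated or malformed batch responses before any dimension value reaches the result table.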