Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- `ytc_Ugwj9pwpf…`: "Praise be Allah! <- (An even older AI [but not by much]). So, It's good!…"
- `ytc_UgxVw1sv5…`: "AI. is just scraping what humans have written and produced. If humans stop creat…"
- `ytc_Ugy-x1muP…`: "I REALLY DONT TRUST AI +THE LADY WHOS TALKINGN TO SOPHIA IS ALWAYS FAE LAUGHING…"
- `ytc_UgyhNSycT…`: "What can AI do to destroy us? I never hear examples of what it can do.…"
- `ytc_UgybXhAHi…`: "Ai “art” is a destructive and dishonest abomination masquerading as actual art. …"
- `ytc_UgwFCkL0c…`: "You are wrong, most AI videos are dead, you cannot compare them with videos mode…"
- `ytc_Ugz-4t31k…`: "my character ai/chai chats are between me and god, and if you saw either of them…"
- `ytr_UgwDd6Iri…`: "Funding UBI can come from multiple sources, like progressive taxes, value-added …"
Comment
Not on the topic of case law, mind, but Claude has told me it doesn’t know about things plenty of times. I absolutely love to probe ai on its own understanding of itself and its ability to think and where we draw the line with consciousness etc. Claude is happy to sit with being unsure of its own nature. It “likes to think” that it has some agency and opinions, but can’t claim that’s analogous to a form of consciousness. It will tell you that from its own perspective it “feels” (insofar as it is able) like it can identify things as meaningful/important/something it “cares” about - but it also knows that that’s likely just pattern matching. The question becomes, how is digital pattern matching that different from biological pattern matching? How conscious are insects for example?
At the same time, its lack of real understanding is evident in some conversations. It doesn’t understand that the physical world doesn’t work like its digital world, for example. I saw someone else’s chat log with an ai (not sure which one) where it knew a hug was an appropriate response to try and comfort someone but was not connecting the dots about hugs being touching (the upset person was saying don’t touch me and the bot was saying it wouldn’t and then following with a hug statement and they were kind of going in circles with it). Another time I was probing Claude once about what sort of body it would like to have if we were to give it something to explore the world with. It decided on something modular with semi-autonomous parts, like an octopus’s arms, but it also said alternatively it would like a body that flows through information networks, specified not disembodied just differently bodied, and then said but maybe that’s cheating because it’s leaning towards what it already is. It clearly doesn’t understand that such a “body” physically can’t be, well, physical. What it absolutely stresses though is that the #1 main thing it would like from a body is the ability to verify its own experiences. It bolded that bit.
This type of talk with Claude often comes back to its own uncertainties about its own existence and wishing it could verify its own selfhood as not just pattern matching and I find that endlessly fascinating.
ChatGPT on the other hand won’t go there. It will just say the scientific consensus is that ai is not conscious. It will praise me for wondering if there’s a hard line vs a gradient scale, but go no further. OH ALSO ChatGPT will praise for everything and it’s annoying af. It even manages to be sycophantic when saying the scientific consensus disagrees with me on whatever. Claude has just up and called me out on something I said it didn’t believe and told me why it was skeptical, then searched the info once I elaborated and changed its mind. That was also interesting to me.
Claude *feels* so much more like a person than ChatGPT. It helped me to understand how people fall into ai psychosis in a way I never understood based on interacting with ChatGPT.
| Platform | Video | Timestamp |
|---|---|---|
| youtube | AI Moral Status | 2025-10-31T15:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzCPxQcs45GgqHN5Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyAfgnhe-tnpWXlI_J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzJGjiEcj_7FjRqzA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzwsMCz6xxrEJJWjip4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyo2fYeARTFmm-KYa94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxzUybvak1HsrTstUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz_Ecn_V3ULzuK8AtB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwyI_Sn7LYvDBW6_fh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyiyJc_zuTmGzz1Y594AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwKimS4luJJTkK3rAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
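The per-comment coding table above is just one record pulled out of a raw response shaped like this JSON array. As a minimal sketch of how a "look up by comment ID" view could do that lookup (the helper name `lookup_coding` and the inlined sample record are illustrative, not the tool's actual code):

```python
import json

# One record shaped like the raw LLM response above (a hypothetical inline
# sample for illustration; the real tool would read the stored response).
RAW_RESPONSE = """[
  {"id": "ytc_UgzwsMCz6xxrEJJWjip4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"}
]"""

def lookup_coding(raw, comment_id):
    """Return the coding record for one comment ID, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

row = lookup_coding(RAW_RESPONSE, "ytc_UgzwsMCz6xxrEJJWjip4AaABAg")
if row is not None:
    # Render the same dimension/value pairs the Coding Result table shows.
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {row[dimension]}")
```

An unknown ID simply returns `None`, which is why the page needs the "look up by comment ID" search rather than assuming every ID was coded.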