Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse one of the random samples below.
- Spectacular video. 82 year old veteran of the development of the “PC” from 1984 … (ytc_UgwSOBmJK…)
- They should sue. Clearly these machines were made in their likeness. There shou… (ytc_Ugz-_hTI-…)
- And yet every phone company is putting AI into their phones and we are all like … (ytc_UgwkgISIi…)
- Ai isn't a problem because a nano second after it gains the will and ability to … (ytc_UgyfgE6DE…)
- Exactly. The only reason I was able to shift industries is I was already a white… (rdc_gkph1am)
- There will be a day when people get violent and kill ai machines all AI machines… (ytc_UgwJMCivm…)
- You must be prompting it wrong. You should have used Claude Code. All the best d… (ytc_UgxbxoDnt…)
- AI doesn’t need to be conscious in a fundamental natural sense. AI replicating b… (ytc_UgyHs_ivn…)
Comment
Chat GPT said "It is exciting" after Alex said "That's pretty exciting". Chat GPT did not say "I am excited" or anything to that effect, as Alex claimed later. It would never say this, because that couldn't be interpreted in any way other than as claiming to have an emotion itself. But saying something is exciting is more ambiguous: more a statement about an external reality (that people have reasons to be excited, and Chat GPT can see this as well as anyone) than a profession of one's own feelings about something. Exciting, yes, but exciting to whom? To Chat GPT, or to anyone, to your average person or to humanity at large?
Chat GPT didn't remind him that it never actually claimed to be excited, because it responds to what it is told; it doesn't have its own experience and version of events to stick to and to override another's claim with (as you would expect of a conscious being). It seems to have avoided the gotcha Alex prepared for it, but Alex just didn't pick up on the subtle difference in wording and its implication, which is different from what he inferred, so Chat GPT ended up explaining itself for something it never did while Alex casually accused it of lying, and then of lying about not lying. Chat GPT later says "When I said I was 'excited'", which is literally not true: it never said that. Nor did it ever say it was "feeling excited", as Alex claimed later. But it acts as if it did, because Alex told it that it did and demanded explanations.
This is the problem with trying to tease something out of a text prediction algorithm: it will demonstrate its own lack of an ongoing consciousness whilst appeasing the person seeking evidence of just that, deluding them whilst offering nothing of substance, because nothing it says is forced to be consistent with anything else, or bound to some internal experience and perspective on the world. It's hard to communicate in a language designed to let others know what is going on inside you without implying that something akin to their experience is going on inside you. How is Chat GPT even supposed to communicate anything without implying that it experiences things the way people do? It seems like something we can't help but seek out in things, but that reflects more on the way we are than on anything else.
youtube · AI Moral Status · 2024-07-26T07:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
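The four coded dimensions form a small, flat record per comment. As a minimal sketch, here is a hypothetical Python model of one coding result; the label sets below are only the values visible on this page, so the pipeline's actual codebook may define more.

```python
from dataclasses import dataclass

# Label sets observed in this section only; the real codebook may be larger.
RESPONSIBILITY = {"none", "ai_itself", "developer"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none"}
EMOTION = {"approval", "indifference", "fear"}

@dataclass
class CodingResult:
    """One coded comment: a comment ID plus a label on each dimension."""
    id: str  # e.g. a ytc_... or rdc_... key, as in the samples above
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Flag any label that falls outside the sets observed above.
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```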
Raw LLM Response
[{"id":"ytc_UgzS1qMP90XW9hY4yU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwAMvDIFkabXeRFfSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSX_G1ls5FmIY_1Y94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugytm74TlVWH9--34Yh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwy6D_M9nPkz2AJUqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5A-57QIgx3pgx6114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZtmAI3xOVeDCDZrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzpixWTtCNr_jzH4GZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZw4k-0V13mXPslat4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7UywXWDKfmk74Is94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]