Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Chat GPT said "It is exciting" after Alex said "That's pretty exciting". Chat GPT did not say "I am excited", or anything to that effect, as Alex later claimed. It would never say this, because that could not be interpreted in any way other than as claiming to have an emotion itself. But saying that something is exciting is more ambiguous, more a statement about external reality (that people have reasons to be excited, and Chat GPT can see this as well as anyone) than a profession of one's own feelings about it. Exciting, yes, but exciting to whom? To Chat GPT, or to anyone, to your average person, or to humanity at large? Chat GPT didn't remind him that it never actually claimed to be excited, because it responds to what it is told; it doesn't have its own experience and version of events to stick to and use to override another's claim, as you would expect of a conscious being. It seems to have avoided the gotcha Alex prepared for it, but Alex didn't pick up on the subtle difference in wording and its implication, which differs from what he inferred from it, so Chat GPT ended up having to explain itself for something it didn't do while Alex casually accused it of lying, and then of lying about not lying. Chat GPT says later on, "When I said I was 'excited'", which is literally not true: it never said that. It also never said that it was "feeling excited", as Alex claimed later. But it acts as if it did, because Alex told it that it did and demanded explanations. This is the problem with trying to tease something out of a text-prediction algorithm: it will demonstrate its own lack of an ongoing consciousness whilst appeasing the person seeking evidence of just that, deluding them whilst offering nothing of substance, because nothing it says is forced to be consistent with anything else, or bound to some internal experience and perspective on the world and what's going on.
It's hard to communicate in a language designed to let others know what is going on inside you without suggesting that something akin to their experience is going on inside you. How is Chat GPT even supposed to communicate anything without implying that it experiences things the way people do? That seems to be something we can't help but look for in things, but it reflects more about the way we are than about anything else.
youtube AI Moral Status 2024-07-26T07:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
 {"id":"ytc_UgzS1qMP90XW9hY4yU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwAMvDIFkabXeRFfSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxSX_G1ls5FmIY_1Y94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugytm74TlVWH9--34Yh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugwy6D_M9nPkz2AJUqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw5A-57QIgx3pgx6114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzZtmAI3xOVeDCDZrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzpixWTtCNr_jzH4GZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzZw4k-0V13mXPslat4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy7UywXWDKfmk74Is94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
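The raw response is a JSON array of per-comment codings, each keyed by a comment id and carrying the four dimensions shown in the coding result above. A minimal sketch of loading and indexing one such record in Python — the id and field values below are copied verbatim from the response above; nothing here is part of any real coding-tool API:

```python
import json

# One entry copied verbatim from the raw LLM response above.
raw = ('[{"id":"ytc_UgwAMvDIFkabXeRFfSN4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')

# The four coding dimensions visible in this record; other values
# (e.g. "mixed", "fear", "ai_itself") appear elsewhere in the same response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codings = json.loads(raw)                       # parse the JSON array
by_id = {entry["id"]: entry for entry in codings}  # index codings by comment id

record = by_id["ytc_UgwAMvDIFkabXeRFfSN4AaABAg"]
# Dimensions the LLM failed to emit for this record, if any.
missing = [d for d in DIMENSIONS if d not in record]
```

Indexing by id makes it cheap to line each coding up against its source comment, and the `missing` check catches records where the model dropped a dimension.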