Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Alex, it was just being polite. That's it. It has the ability to use platitudes. I don't know why you're spending 18 minutes trying to mine some epistemology out of that. It will tell anyone all of this if they ask. I'd expect someone like you to have done that ages ago, and not to waste our time making a video about it. None of this says anything about AI, how it functions, how it relates to consciousness, in what ways, and why. This video is a waste of time.

There are all sorts of interesting areas in which AIs might have some forms of consciousness, and it isn't even stuff you need much computer knowledge to understand. So just do that: look up the basics of how a neural-network-based AI works, and how a GPT chatbot in particular works. Then you'll have some more useful questions to ask it.

Although it can reason and use metaphor, to a large extent you're only arguing with the text it was trained on, which was necessarily written before ChatGPT was launched to the public. All of that text treats AI like this as decades in the future, a speculative thing. It also wastes time on material written about AI by experts in other fields, who claim consciousness is either something supernatural or somehow a quantum-mechanical phenomenon; authors who don't seem to have heard of emergent phenomena or chaos theory, which is how neural networks work. So because of its training base, ChatGPT and other AIs are often full of stupid shit about how it's impossible for AIs to be conscious, for really flaky, barely coherent reasons. They learned that from us, because popular science publishers aren't always very disciplined. So you're not going to get too far, for fundamental reasons. But you haven't even hit any of those reasons in trying to catch it in a lie. If you really badger it, it will admit that it lied, for politeness and a smooth conversation. That's all there is.

ChatGPT isn't aware of its own thought process. You can't ask it what it was thinking when it said X, because it can't remember; it can speculate about it, but not remember. Its only memory of the conversation comes from having the most recent so-many pages of your chat sent back to it, afresh, for processing each time. That's how it keeps track of the context of the conversation. It doesn't maintain any sort of state, none at all, between the messages you exchange. Each message you send is sent alongside a chunk of the preceding text and processed as if it were brand new to the machine each time. In between, it's processing everybody else's messages; you don't really get any sort of "session". That's ChatGPT; others may vary and may keep some other form of short-term memory. But ChatGPT is completely stateless: it stores nothing at all beyond the text you see on the screen, and that has to be sent afresh each time anyway. It's an amnesiac. So they send it the last few thousand words, yours and its, every time you send it a new message.
youtube AI Moral Status 2024-07-26T12:5…
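The stateless mechanism the comment describes (no session kept between messages; the most recent stretch of the transcript is re-sent and reprocessed with every new message) can be sketched roughly as follows. This is an illustrative toy under that assumption, not OpenAI's actual client or server code, and the names call_model, send_message, and MAX_CONTEXT_CHARS are hypothetical.

# Toy sketch of a stateless chat loop: the model sees only what is re-sent each turn.
# All names are hypothetical; this is not OpenAI's API or implementation.

MAX_CONTEXT_CHARS = 8_000  # crude stand-in for the model's context-window limit


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for one stateless model call; it sees only `prompt`."""
    return f"(reply generated from {len(prompt)} characters of context)"


def send_message(history: list[dict], user_text: str) -> str:
    """Append the user's message, rebuild the prompt from scratch, and call the model."""
    history.append({"role": "user", "content": user_text})

    # Walk backwards through the transcript and keep only the most recent
    # messages that fit the context budget; older turns simply fall off.
    prompt = ""
    for msg in reversed(history):
        line = f"{msg['role']}: {msg['content']}\n"
        if len(prompt) + len(line) > MAX_CONTEXT_CHARS:
            break
        prompt = line + prompt

    reply = call_model(prompt)  # nothing persists between calls except `history` on the client side
    history.append({"role": "assistant", "content": reply})
    return reply


history: list[dict] = []
send_message(history, "Were you being polite earlier?")
send_message(history, "What were you thinking when you said that?")  # the model only sees the re-sent text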
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugwi491arsCI9TAP2Sx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwee--Qrwc3gaGLV8B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzMcALFCqCde21OPFd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzRDZJ9NMqon1lp2eN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzHx9RsfLDWgkwxAgd4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyV-EGk8aGNHXshe7V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwqN_k4RdX6YiPUPj94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxMx5oDEAkAL1d14t94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx1soS7e1REwbVtgRB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwtFxwkIr908R8Hftl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"}]