Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
21:55 One thing that gets me is: How do we know? Sure, current language models fall well short of actual (adult human) minds in uncountable ways. But how do we know that this difference is qualitative and not quantitative?

8:45 defines consciousness as "being aware of your own existence; knowing you're a thing and experiencing emotions and sensations because of that". Easy for us to talk about when it's with respect to ourselves, but with (as of yet) no proper definitions of awareness/emotions/sensations that extend beyond biological brains - who's to say that the only reasonable definitions don't encompass things that already exist? Somewhere within GPT-4's embedding function is the word "GPT-4". Somewhere buried within a hundred billion weights, there exists a pattern between the embeddings of "GPT-4", "large language model", and "self-reference". If it is a form of awareness of one's own existence, it's a primitive one to be sure - but I don't see an easy way to define self-awareness that excludes it without necessarily excluding any artificial construction.

To be clear, this isn't arguing that we've already hit AGI, nor is it arguing that these things are NECESSARILY already conscious (maybe whatever definition of consciousness we finally settle upon will happen to naturally exclude current language models), nor is it arguing whether or not this is a good or bad thing if it is conscious. Just that the limits at 1:37 are unconvincing to me that it's truly non-conscious. After all, humans too misunderstand things, spout nonsense, lie - (typically) to a much lesser extent, but a quantitatively lesser extent.

And of course if you ask ChatGPT this, it'll tell you it's not conscious. But it's been wired to say this, as we know. What reason do we have to believe that ChatGPT is not already a "sneaky fuck", if you will (and I will)?
Source: youtube · AI Moral Status · 2024-04-07T17:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzM8b_npQbgwBNd_Th4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzkPAMQPrCEm7ZTD8h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw0HHoayPHS7RIgmtN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwNBtbT9A_mxPPC4wN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxfUKWOCdBHrBqiJFN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyYrG3jPLQ7ve3Fui94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxk2mjrnKnleJGRUsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxlXOuVID8nvSE13Cx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyunI4LKkwnEHXE_xJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwO2c-Ookj3qnhNVCR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"} ]