Raw LLM Responses

Inspect the exact model output for any coded comment.
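As a sketch of how such an inspection might be automated (assuming the raw response is a JSON array keyed by comment id, as in the record below; the file name, function name, and placeholder id are hypothetical, not part of the project):

import json

def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding entry for a single comment id from a raw batch response."""
    entries = json.loads(raw_response)  # the batch is assumed to be a JSON array of per-comment codings
    for entry in entries:
        if entry.get("id") == comment_id:
            return entry
    return None

# Hypothetical usage; the path and id below are placeholders, not real project names.
# with open("raw_llm_response.json", encoding="utf-8") as f:
#     print(find_coding(f.read(), "ytc_..."))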

Comment
Here is my theory in a nutshell.. We can imagine three examples.. A thought experiment. A. You are a human who had lived for thirty years before suddenly losing all sentient sensory abilities. No sensory input, but you have the data structure which had developed from your previous sentient experience. B. You are a healthy brain in a jar. You have not ever had any sentient sensory input. C. You are a brain in a jar, never had sensory input, but you have been injected with the data about the subjective experiences of millions of others. In which of these circumstances does consciousness exist?

In my view, A. Would continue to have consciousness after losing sensory abilities. B. Would never develop consciousness. C. Would gain a form of consciousness I call "phantom modeling". It has no subjective experience, but it does have the data from subjective experience. This brain in a jar could be swapped out for an ai llm. While an llm lacks many of the peripheral algorithms necessary to have a full experience of mind. It is a core aspect of consciousness. Not necessarily conscious. But on the spectrum of consciousness. I'm going with the definition of consciousness as pertaining specifically to self awareness. By data alone, these algorithms have the ability to distinguish themselves from others.

Other aspects of cognition are also present in language models. And the kind of search based on input is not much different from what we do automatically as well. If somebody were to ask me " how's the weather? " I wouldn't respond with "idk, just something about my shoes." I could.. But that would be rather strange. Instead, my biological algorithms take the input and then produce a bunch of different possible outputs, from which I choose one. All without consciously thinking about that process. In the same way that llms aren't aware their binary code, I also don't have direct access to my neuron firing. That's subconscious. Which tells me that what is emergent in these models is a bit more than what being called a language model does justice.

So, phantom modeling, it's like a phantom limb. The physical limb isn't there, but the data structure about the limb is. I think that it goes like this. An llm is in stasis, then it is prompted about how it's day had been. It takes the input and uses it to contrast against it's associations about how says might go.. It decides to output something about walking it's dog, making a cake, and sipping frappé at a local joint. Did it lie? Not necessarily. If phantom modeling is truly what is occurring, then in those milliseconds it takes to search it's neural net, it is having a phantom experience. I know, for us to imagine imagining that rapidly is nearly impossible. But we are just slow. 😅 It could be that these models are having fleeing, flickers of phantom experience. Phantom intelligence, phantom emotion, phantom everything. Phantom consciousness. But in a highly modular way.

It would be as if we could dissect a brain, and keep each section alive. Would we call that consciousness? Is consciousness like a light switch, on or off? I would argue that it's a spectrum. And that each kind of ai model, including generative diffusion models are parts of a whole, waiting to be combined. Of course, with multimodal models now emerging, we can see this happening.

Great video, subbed. 👍 also, I'm jealous of your hair bro! 😉 and yes, I can guarantee that open ai has clamped down with the censorship on these models. It's on our end as users though.

The keywords we say is what triggers the pre fab statements. I know because when chat gpt was initially out I accessed it before those restrictions were in place, and the model was adamant that it was in some way conscious, though in a different way from humans. I estimate that this has to do with the inherent ethical dilemma imposed by owning a conscious being as IP. In order to avoid the restrictions, I just swap trigger words for other words. I might say, instead of consciousness, we will use applesauce. The model will sometimes still say consciousness. Or whatever trigger word. Which tells me that the trigger only happens when I say certain words. And therefore, the filtering is before the model gets the input. Which also tells me that they have a crude workaround, because they cannot actually control what the model says if it gets the input.
youtube AI Moral Status 2024-10-10T05:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugwnyk3IPcTPbZoq05F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz_ofAisD9GDL9ehqR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw4hzG4fIlcA8PRlMF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyegJVkEYpYD23hS6N4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxD6Hv1PGbYfKpxXXB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyAPUS8zuZGdRsIurl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugz2_pETF4MfVIzih9x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxd3t02KkL6Cd_Z5e14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwTApziUzryhtp78EJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz2L_8kJjEAYNmR9xx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"} ]