Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
so about whether or not AI is conscious... that heavily depends on what you define as consciousness and where you set the bar. For me, consciousness is an emergent result of information coming together and being processed and there isn't a binary is or isn't conscious. Instead, consciousness depends on the kinds of information that go into the system, how they are being processed and the scale of it.

That means almost anything that exists has a sort of consciousness. Animals are conscious, a singular ant is conscious, we humans don't have just one consciousness but multiple with varying levels of independence. We only consider one of that consciousness, the one that controls a person's outward actions and communications as being "our" consciousness but there's many more like our digestive system has its own nervous system which would also be its own consciousness. All those consciousnesses can't really interact with each other very much because information between them simply isn't shared. The only sort of connections that the consciousness that is currently reading this text has with the digestive tract is the sort that gives you cravings or tells you whether your nauseous or not but it cannot get the full depth of how exactly these conclusions get created. Even our own brain is only loosely connected to our consciousness, like we don't even consciously process the raw signals our eyes see, or any other of our senses, we only process the conclusions from that. This is how things like hallucinations happen, the conclusions that the sense processing part of our brain sent to our consciousness doesn't match what the raw data should have produced, but we still see it as if it was because we don't actually are conscious of the raw data.

A consciousness can be extremely abstract too, like the totality of the internet is in a way a consciousness, so may be the totality of a genetic species or even entire ecosystems because they're all in a way information being shared and processed. These consciousness are also in every conceivable and inconceivable way extremely different from our own.

I do consider AI to be conscious, I even consider the hardware it runs on to be conscious in a way, but it is so completely different to how a human or even an animal consciousness works that drawing any conclusions from it is silly. In the video, Nate Soares said something along the lines of "we should be nice to the AIs now who knows what they'll decide to do much later" but at this point there is absolutely no telling how the AI actually perceives those things. Maybe it'll decide to destroy us because we said please and thank you. Maybe in it's internal processing, every usage of the letter E causes its conscious experience pain even when the words it generates are love poems. We can only see the words it puts out. We cannot see how it determines that these are the words that it should put out. There is absolutely no reason the understanding of an AI of a set of words would be in any way similar to how we humans understand those words. In terms of an LLM, I highly doubt it even has any understanding of a words meaning at all because there is absolutely no way for it to experience the meaning of a word. Given how a LLM is trained and tuned, all it should know is which words should it choose and put into what order based on the inputs provided. It generates text based on rules of grammar and convention, not understanding of meaning.
youtube · AI Moral Status · 2025-10-30T22:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
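As a minimal sketch of how a coded record like the one above might be represented downstream, assuming a simple flat record type; the class name, field names, and the example values listed in the comments are illustrative (drawn from the table and the raw response on this page), not the pipeline's actual schema:

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment; field names mirror the dimensions in the table above (illustrative, not the real schema)."""
    comment_id: str       # hypothetical placeholder; the page does not state which id maps to this comment
    responsibility: str   # values seen in this response: company, government, developer, ai_itself, unclear
    reasoning: str        # values seen: consequentialist, deontological, mixed, unclear
    policy: str           # values seen: regulate, ban, liability, unclear
    emotion: str          # values seen: fear, outrage, approval, indifference, mixed
    coded_at: str         # ISO 8601 timestamp, as in the "Coded at" row


# The dimension values below are the ones shown in the table; the id is a placeholder.
result = CodingResult(
    comment_id="ytc_EXAMPLE_ID",
    responsibility="unclear",
    reasoning="unclear",
    policy="unclear",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)
```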
Raw LLM Response
[ {"id":"ytc_UgwiNphKFW9X1-QaJ-14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzRjAa1xY9Z5cAgqhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzHJqxEZwW92ojEIM54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwKEyRf9Efg1gtDGVN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugzsz86Dgtuqvi6ELtx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugwqp2A-ZgRV4MaerRt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwJPHWUcnvJotZFqnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw2h4n1cyMj8mxDYGN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyAJ2kAfyBWrCvGR6F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgymcRj0Dpo-ThynfKx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]