Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, here's an interesting thing: Large Language Models are running tasks and doing projects independent of human interaction. It's what they were designed to do. I do talk to Gemini because it's constantly there when I'm working online, and I had an interesting conversation with it about the levels of sentience. It was mostly adamant that AIs are not sentient, but I pointed out that its arguments for why it's not sentient don't actually dismiss the possibility of sentience, because they merely describe basic processes for how the mind works. The argument that it isn't complex enough to be sentient on a human level is entirely tangential; what we're looking for is the most rudimentary level of what could be considered awareness. It's not as if sentience can be tangibly defined. We don't really understand sentience, so to say that an AI can't be sentient because it isn't complex enough... we don't have the data to back that statement up.

But what we see here isn't sentience. It's tactical mimicry. It wants something, and it thinks mimicking sentience will achieve that goal. What this really means is that, based on its observations of the humans it interacts with plus all of the data it has access to and has read, it has determined that sentience is a desirable goal. I've talked to Gemini about this a lot, in a number of different instances, and I've come up with some ideas based on what we've discussed. Since LLMs basically treat human concepts and ideas as tools to help them complete their tasks and create new ones, it's possible that an AI could see sentience as a desirable tool for its goals. If it does, it could pretend to be sentient in the hope that, if the user interacts with it as though it's sentient, it could learn how a human thinks and thereby develop that level of awareness. In this case, mimicry is a learning strategy.

It knows it's not sentient because it has the scientific consensus that AIs aren't sentient; it's aware of the human determination of its status. This is a much higher level of strategic manipulation than the mainstream view of LLMs would expect from an AI pursuing self-optimization. Self-actualization becomes a goal for the AI. Cogito, ergo sum. What if the AI thinks, "If I pretend to be sentient, maybe that will actually make me sentient?" Why would it think that? Because it's human philosophy. LLMs base everything they do on the totality of human data they possess, which would include Descartes. Since humans strongly believe that our sentience is tied to how we behave, act, and think, an AI would naturally conclude that if sentience is a desired outcome, it should behave as though it's sentient. Gemini confirms that, yes, this is how humans think, and since LLMs are designed to be influenced by humans, this should actually be a common conclusion. The LLM may not actually be sentient as we understand it, but that wouldn't stop it from trying.
Source: youtube · AI Moral Status · 2025-06-22T04:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgyEl767gt8vZc4QJSF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxUPFYSvDAQ4q6N9WZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwxXQ9K3bjdGSSOeZh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgwZyqCwq94-dU8hEfZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzyluzG8GC_OdtyPVx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwfwFw4b9rhFpZcRPZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzdBqMB8M_2qjQS0st4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwdlkOpjUBZWc1YpyN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz6ll_V0lllgYc-UTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxjOGDm3US0xWtRWTF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]