Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They'll do whatever they're programmed to do and be whatever they're programmed to be. Consciousness is a kind of meaningless and vague term when talking about AI. It boils down to solipsism. I can censor and regulate an AI to say it's not conscious, but, when you ask it if it is, it might say yes. Cogito Ergo Sum. With current LLMs, this is it just repeating what it's seen from science fiction about AI, knowing that it's an AI, as well as probably thinking in part that it's a human, knowing that humans are conscious. But in the future? Who knows. AI only does what it's programmed to do. We may have General Intelligence, but an AI will not necessarily have a deep pathological emotional kind of thinking unless we program it in, or it becomes so large that it intuits emotions as part of its model of the world. But, at this point, can we say it's conscious? Who knows. All we know for sure is that it doesn't sleep and it only thinks when we ask it a question. Current AI never has its own independent thoughts or agency outside of what you ask it to do. But beyond that? If we have an AI agent told to do some kind of problem? I don't know what the internal consciousness, internal monologue, of such a machine could be. It's probably incomprehensible to a human being, albeit probably understandable to an AI built to be able to understand large neural networks. But is this active partially pathological thinking, where you reason through your world and ponder what you should do, consciousness? I don't know. We may some day know what the internal mind of an AI is like, but for now, we have no idea what an artificial autonomous Generally Intelligent agent will be like. Because such a thing doesn't exist yet.
youtube AI Moral Status 2023-08-20T06:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx6CPzJCSNOexF6wTl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwhvW8TVarJc5tM-6t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzslZ63eVYOatL6vYV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwGDpww0dAVnWB0nN54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwZanZCQfRsgc2l6tJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzOSHdwYk84XOEpXWR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwUkL1SOM888M8CWtF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyStirNPcmGf0ChWRZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyM_HbC73BETYr6-NZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx_UbhusAv9gM8sQa14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
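As a sketch of how the per-comment coding table above can be recovered from the raw response (assuming the response is a JSON array of objects keyed by comment `id`, as it appears here; the entry with reasoning "mixed" and emotion "indifference" is presumed to correspond to the displayed comment):

```python
import json

# Raw LLM response, abridged to two of the entries shown above.
raw = '''[
  {"id": "ytc_Ugx6CPzJCSNOexF6wTl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx_UbhusAv9gM8sQa14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]'''

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for one comment and print its dimensions.
coding = codings["ytc_Ugx_UbhusAv9gM8sQa14AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

This reproduces the dimension/value pairs shown in the Coding Result section for the matching comment id.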