Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like you're putting far too much emphasis on the question of whether AI can be conscious. The important question is how to solve the alignment problem, and an AI intelligence doesn't need to have something that we consider to be "consciousness" in order to greatly outmatch our cognitive abilities. Whether its method of understanding and making decisions resembles our own is irrelevant when those abilities make it capable of destroying us. Like Edsger Dijkstra said, "The question of whether machines can think is about as relevant as the question of whether submarines can swim." The important thing is that it's extremely capable of moving through the water, not how we define its method of doing so. You're talking about consciousness as if it's a real thing, but it might be no more possible to quantify and empirically verify the existence of consciousness than it is to do the same with the concept of "free will". It feels like a real thing to us, but it might be nothing more than a convenient mental model that we use to understand our own intelligences, no more or less correct than many other possible models, with no intrinsic state of existing or not. We only value consciousness because we perceive it to be part of how our own intelligence works. Other models may be just as valid.
youtube · AI Moral Status · 2023-08-22T03:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugw6w8dyYLib88hVXnh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgylREPt4i2otOdM1uN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgweOBXZUhmhtZmPLRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxGA0OItV7Z7xwcZpp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzkEh0-DguBqt-vdOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx8XQ3FBg4gm58lQSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyxBSE-AxInAcIAsp94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzksPd2c7WQqG_BrFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxxGLIquoWHRCYerGd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyN0x0h6eJijj09epN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]