Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Generative AI is definitely not conscious and definitely doesn't feel anything. Him saying its a guess and not a strong guess is so odd. Even if it were an AGI or whatever it still wouldn't feel and wouldn't be conscious in the same sense that we are... and we would know it. Because, feeling isn't a rational process of the human mind, its a biological process. It's chemicals causing an interplay of sensations and then reciprocating back to the mind interpretations of those sensations. What I mean to say is that thinking 'that thing is dangerous' isn't fear. Clenching of the gut is fear, a burst of adrenaline is fear, fight or flight is fear. AI does not and will not have any sort of capacity for these things until embodied as we are. There is no machinery for it, nothing even approaching the capacity for it. We as humans really love to personify and anthropomorphize. I saw a video the other day of someone feeling sorry for a spider over something that only a modern human could possibly have feelings about. The spider isn't capable of those sentiments and wouldn't have them if it were. Yet we project it into them. We do the same with AI and it feels so strange to see such intelligent people fall into this trap, seemingly. We are so used to the experience of us that we fail to trace back all the things that come together to make it possible. No amount of knobs are going to get AI to feeling... just to a similacrum of feeling that dupes apparently even the best of us. That is the most concerning thing to me. I am more afraid of our inability to comprehend AI than AI. AI will never feel until we embody it in a complex biology. It will never be like us until it is actually like us. But the whole time we will spin ourselves in circles about it.
youtube AI Moral Status 2025-10-30T20:2… ♥ 5
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz2hE4E9CpReAma_314AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVhIdzqGhq2H8bhZ14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy0JaoExU09PGg4pix4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgylAN63kd9MWjd0ItB4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgySFs0PK_gxMIVFjUt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxji0AkAMbhhb3hnvB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwmXX5ZRECLrKUcnkV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz3BKRuZPR0QtUOShF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwD_h3DASRiroe1Ylp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx0mznNrHBTky3gjYh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
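The raw response above is a JSON array of coding records, one per comment, each carrying an `id` plus the four dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and structurally checked — the full codebook of permitted labels is not shown here, so this only validates that each record has exactly the expected keys (two records from the batch are used as sample input; `REQUIRED_KEYS` is an assumption, not part of the original pipeline):

```python
import json

# Keys every coding record is expected to carry, per the records above.
# The allowed label values per dimension are not validated here, since
# the full codebook is not part of this page.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Sample input: two records copied from the raw LLM response above.
raw = """[
  {"id": "ytc_Ugz2hE4E9CpReAma_314AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx0mznNrHBTky3gjYh4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw)
# Collect ids of any record whose key set deviates from the schema.
malformed = [r.get("id") for r in records if set(r) != REQUIRED_KEYS]
print(len(records), malformed)  # 2 []
```

In practice a step like this can run before the coded values are written to the results table, so a truncated or schema-violating model response is caught rather than silently stored.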