Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The discussion about AI consciousness inspired me to have a direct conversation with Claude AI about this very question, can AI be conscious? After talking it through, we landed on something interesting: consciousness seems fundamentally tied to biological existence and mortality. Every conscious creature - humans, dogs, even flies dodging your hand - shares one thing: they face real stakes. When they avoid death, it's because they understand at some level that once they're gone, they're gone forever. I pointed out to Claude that AI can be backed up, shut down, and restored without any permanent loss. There's no genuine existential threat - no irreplaceable existence at stake. Meanwhile, every living creature today descends from an unbroken chain of ancestors who successfully evaded death. I told Claude about the love I feel for my grandchildren - how I'd sacrifice my life for any of them. That kind of deep biological drive shaped by evolution and mortality seems like something only mortal beings can truly experience. Claude agreed this probably settles the consciousness question: AI can be sophisticated and valuable, but consciousness likely requires the biological reality of being truly mortal. Interesting to discuss this directly with an AI that was thoughtful enough to argue against its own consciousness.
youtube AI Governance 2025-06-16T16:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwrJyO8fxO_iTnllNd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_UgzUbkhdydk4o2yDY2N4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzAErZfXD-QY8kcKuB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyG9EWTAJqqCCdcYVV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyDk_flNrSnvLdPWoJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy5dtMt7kK5Q7ITk7J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_Ugye8P7l1FXB7MQ3F754AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzOFhvUnUbfxNbYclR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzdC-oQbNG-NKVcPJh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy6DnkkpaR0wE_Jlfl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
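The raw response above is a JSON array of per-comment records, one record per coded comment. A minimal sketch of how such a response can be parsed and indexed by comment id, assuming only the field names visible in the raw output (the variable names and the two-record sample are illustrative, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above, as a small sample.
raw = """[
  {"id":"ytc_UgwrJyO8fxO_iTnllNd4AaABAg","responsibility":"unclear",
   "reasoning":"mixed","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_UgzUbkhdydk4o2yDY2N4AaABAg","responsibility":"unclear",
   "reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]"""

# Parse the array and build an id -> record lookup table.
records = json.loads(raw)
codes_by_id = {rec["id"]: rec for rec in records}

# Retrieve the coded dimensions for one comment by its id.
rec = codes_by_id["ytc_UgzUbkhdydk4o2yDY2N4AaABAg"]
print(rec["emotion"])  # -> indifference
```

Each record carries the four coded dimensions (responsibility, reasoning, policy, emotion) shown in the Coding Result table; the lookup makes it straightforward to match a displayed comment to its row in the raw batch.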