Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The presupposition I notice in this talk is that consciousness is even a possibility for AI. But it might not be. I understand where that assumption likely comes from, the idea that our own consciousness arises from emergent properties. But through a Christian lens, consciousness is granted by God.

On the video game point (7:01), before I was Christian, I could randomly kill people in GTA without a second thought. I can't do that anymore. I play different games now. So I do get what Sam is getting at. That said, I don't think people who kill in GTA are psychopaths. Questionable, maybe, but not psychopathic.

However, if AI becomes truly indistinguishable from humans, that calculus changes. It starts to look psychopathic, and from a Christian perspective, it might actually be objectively immoral, because at that point you'd be subverting the image of God onto machines and then enacting violence against it.

It also connects to what Jesus said in the Sermon on the Mount: murder isn't only the act of killing, it's the hate in the heart. So someone could replicate the likeness of an enemy onto an AI and punish that AI because they can't reach the actual person, or because of legal consequences. But the hatred driving it is still real. The heart is still murdering. So it still falls under objective morality, regardless of what the target technically is.
youtube · AI Moral Status · 2026-04-07T09:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugxd-16xAhgIrJXWzLJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgxgCWSW-eClYf6NIPZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy5mppCvlbj-HDaEBh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz6CT7hF5hLGFa-0mN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzjBEyNbxEmyuNRhgR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwz3JiMAKtryQWqEiR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxuFhVcfbKYwQqv5E14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxAZ_LxCu3tiF12E9l4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwZXjdSS7kd3w6LMSJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyEhROkEKS8Esph1v14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]