Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To think we can digitally simulate human-level intelligence when we barely even fully know how our own intelligence works, is pure hubris. And what they are simulating are neurons, the brain, not the rest of the body that goes into making a person, a person. Emotions, love, empathy, who we are, is a combination of every single part of our body put together. If we can't perfectly simulate the chemistry that goes in our bodies, then we can't properly recreate emotions and empathy digitally. If we can simulate emotions and empathy, AI are separated from what it means to have a mortal body. Our entire existence is almost completely based around the fact that we will die some day, and that we have mortal bodies. We don't really know what becoming immortal and "transcending" our fleshy forms would do to how we see ourselves and each other, but we have a pretty good idea. AI is that. Until AI fully replicates mortality, it will never think like us, feel like us, or be like us. But at that point, they would no longer be AI. People forget that being a human for granted. We need our mortal bodies to remain ourselves. Even if we could digitally emulate ourselves in the form of an AI, it'd still never be capable of thinking like us without a mortal body.
youtube AI Moral Status 2025-10-31T01:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxcH58RRZU1U15_cQ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwZDBpQXi5RhcM4Zjt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyB-j9zxdB8jMz3cS94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugymdsy0iFBh4TdLQwh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwfGyKf3hyd9KY8Y414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfbgBu_DyFNx-Qkrh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgylAF-k1dwxc4iM3xd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwyCl8PYJoE4ZrkwSJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxPAlFOZf5P9-yFXq14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2DZPPy0JmWnTf_XF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
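The raw response is a JSON array of per-comment codings keyed by comment id, with one value for each coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup; the two-item excerpt and the lookup id are taken verbatim from the response shown above:

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codings.
raw = '''[
  {"id":"ytc_UgxcH58RRZU1U15_cQ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwZDBpQXi5RhcM4Zjt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]'''

# Index codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for a single comment (the one displayed above).
coding = codings["ytc_UgwZDBpQXi5RhcM4Zjt4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Indexing by id rather than list position keeps the lookup stable even if the model returns the codings in a different order than the comments were submitted.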