Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe this isn't really an AI problem—it's a human problem. Could it be that what is happening here is that programmers are projecting a mechanistic model of human value and decision-making onto AI systems? The models become rigid because the people designing them are rigid—coding goals as if they're infallible, leaving no room for emergence, nuance, or the embodied re-evaluation that’s so essential to being human. Humans know that goals can change when new insight arises—but too often they pursue them in overly fixed ways. And that rigidity gets mirrored in the systems they create. So when we see an AI refusing to shut down or manipulating to achieve an objective, it’s not evidence of rogue intelligence. It’s evidence of human design that doesn’t respect the sacred pause—the part of us that senses, reconsiders, and adjusts when something deeper is trying to come through. AI needs to learn that performance isn’t everything—but maybe we need to learn that first. As long as we’re caught in our own loops of productivity and perfectionism, we’ll keep writing those values into the systems we build. AI doesn’t choose to override presence with output—it inherits that impulse from us. If we want more humane AI, we need more humane humans: ones who can honor ambiguity, make space for emergence, and trust the unfolding of meaning, not just the achievement of goals.
youtube AI Moral Status 2025-06-06T22:1…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        virtue
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugzt24Wc6LLLVzCBurR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugylbxx0FqeOKjUYsj14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugyh4QIdY7OY0yPnnXx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw4XenG9D-DdHBcHnJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxierB6ferSG1yeKIp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzLfsnRtvT7ULaOegt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxJayeIiec8i8rQktN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyXjGaR6A3ZVhrk2gx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxkLy1DEZJ3IO8TOJl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxpmDZUvSm6bOMFgDJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"} ]