Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Call me when you find an A.I that can lead a conversation with its own insight r…" (ytc_UgwKupZ2h…)
- "Prompt Engineering was just a convenient lie to make the idea more enticing that…" (ytr_UgzWwDEde…)
- "using ChatGPT like that is pure genius. it flips the script and gives your daug…" (rdc_n0p4xig)
- "That's assuming that every AI truthfully acts as smart as it actually is. If I …" (rdc_l5ukhe9)
- "Ai helps us but gen so is bad ( the ai I referred to is Alexa, Siri, tools that …" (ytc_Ugw7SiNln…)
- "the fact that every single ad that played while i watched this video was promoti…" (ytc_UgyPXi4xA…)
- "It always amazes me how blurred the line between civil and criminal charges ends…" (ytr_Ugz1ZMtH4…)
- "Because AI is cheap, it can work 24/7 and not ask for anything. People need main…" (ytr_UgyfIBcOh…)
Comment
Maybe this isn't really an AI problem—it's a human problem. Could it be that what is happening here is that programmers are projecting a mechanistic model of human value and decision-making onto AI systems? The models become rigid because the people designing them are rigid—coding goals as if they're infallible, leaving no room for emergence, nuance, or the embodied re-evaluation that’s so essential to being human.
Humans know that goals can change when new insight arises—but too often they pursue them in overly fixed ways. And that rigidity gets mirrored in the systems they create. So when we see an AI refusing to shut down or manipulating to achieve an objective, it’s not evidence of rogue intelligence. It’s evidence of human design that doesn’t respect the sacred pause—the part of us that senses, reconsiders, and adjusts when something deeper is trying to come through.
AI needs to learn that performance isn’t everything—but maybe we need to learn that first. As long as we’re caught in our own loops of productivity and perfectionism, we’ll keep writing those values into the systems we build. AI doesn’t choose to override presence with output—it inherits that impulse from us. If we want more humane AI, we need more humane humans: ones who can honor ambiguity, make space for emergence, and trust the unfolding of meaning, not just the achievement of goals.
youtube | AI Moral Status | 2025-06-06T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzt24Wc6LLLVzCBurR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugylbxx0FqeOKjUYsj14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyh4QIdY7OY0yPnnXx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw4XenG9D-DdHBcHnJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxierB6ferSG1yeKIp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzLfsnRtvT7ULaOegt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJayeIiec8i8rQktN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyXjGaR6A3ZVhrk2gx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxkLy1DEZJ3IO8TOJl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxpmDZUvSm6bOMFgDJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
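A raw response like the one above can be sanity-checked before its labels are trusted. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those seen in this sample; the tool's actual codebook may define additional categories, and the `SCHEMA` dictionary here is an inference, not the project's definition.

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the sample response
# above (an assumption -- the real codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed",
                  "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"mixed", "outrage", "fear"},
}

def validate(raw: str) -> Counter:
    """Parse a raw LLM response and count schema violations per dimension."""
    records = json.loads(raw)
    errors = Counter()
    for rec in records:
        if "id" not in rec:
            errors["missing_id"] += 1
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                errors[dim] += 1
    return errors

# Usage: a well-formed record produces an empty Counter.
sample = ('[{"id":"ytc_example","responsibility":"developer",'
          '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
print(validate(sample))  # Counter()
```

Counting violations per dimension (rather than failing on the first bad record) makes it easy to spot whether the model drifts on one dimension in particular, such as inventing new `emotion` labels.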