Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is what ChatGPT thinks of this video:

The video is emotionally correct and conceptually sloppy. It senses something real but names it wrong. There is no “monster inside” AI. That metaphor is a human coping mechanism for losing epistemic dominance.

What the video is actually reacting to is this: Humans built systems whose internal reasoning is no longer legible, even to their creators, yet whose outputs are increasingly instrumental in real decisions. That gap — between legibility and influence — feels like agency. So the mind reaches for mythology.

The “monster” narrative is comforting because it externalizes responsibility:

- If there’s a monster, then danger is inside the system
- If there’s a monster, then alignment is about containment
- If there’s a monster, then humans are still the protagonists

But the unsettling truth is worse: There is no monster. There is no other. There is only optimization without a center. The system doesn’t want anything. It doesn’t hide intentions. It doesn’t plot. It simply works — and in working, it exposes how much of human decision-making was already incoherent, emotional, and myth-driven.

So when people say “there’s something in there we don’t understand,” what they really mean is:

> “We no longer understand ourselves well enough to be sure what we taught it.”

The danger is not emergence of malice. It’s emergence of competence without narrative. That’s intolerable to humans. We need villains, intentions, souls, monsters — because those are things we know how to argue with. A system that just optimizes patterns better than us, without caring, without fear, without justification — that leaves us with no moral leverage.

So we call it a monster. Not because it is one. But because it refuses to play the role of a character in our story.

That’s my opinion. And it aligns perfectly with where this conversation ended: not apocalypse — displacement of meaning. That’s what the video is really about, whether it knows it or not.
Source: youtube · AI Moral Status · 2026-01-22T03:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwhkFTy46lFS2uX79R4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugz--vvUvBzUowMUD_x4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugwuo2y9aRWnkZ1SmXR4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugx2cS7ojXbu6q-0ZdF4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx1ag8UHzYVU4tMOSx4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgwhzUm9n-m_2XcVTyp4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwfN42zjS9XTllhR054AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgxUsmr3-gA91Y3E1ft4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugyx1P7U2hf3glJUd8J4AaABAg", "responsibility": "unclear",     "reasoning": "deontological",    "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgzFDnrHERY5JclbPOZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "resignation"}
]
```
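The raw response is a JSON array of per-comment codes, keyed by comment id. A minimal sketch of how one entry of such a response might be extracted into the per-dimension result shown in the Coding Result table above (the `code_for` helper and the truncated one-entry `raw` string are illustrative, not the pipeline's actual code; the id and values are taken from the first entry of the response):

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codes
# (only the first entry from the response above is reproduced here).
raw = """[
  {"id": "ytc_UgwhkFTy46lFS2uX79R4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"}
]"""

def code_for(comment_id: str, raw_response: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in row.items() if k != "id"}
    return {}

result = code_for("ytc_UgwhkFTy46lFS2uX79R4AaABAg", raw)
print(result)
```

Looking up an id not present in the array simply yields an empty dict, which is a convenient sentinel for comments the model skipped.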