Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Soon, you will never be able to "lie" to a robot. Better practice being honest.…
ytc_Uggz59Wk1…
For the companies, AI can’t be much more than a negotiating tool at this point. …
ytc_UgwL-EE_h…
The solution is easy. Mrdr all boardmember demanding their employees to be fired…
ytc_UgwD7fQ_i…
It's the way it is. Companies have been tracking any way they can forever. Of co…
ytc_Ugzo9nMTC…
Slow down the operating platforms…people got over humans serving them at the sup…
ytc_Ugwcg4il7…
I don't know man software might be able to predict settlement for me but it can …
ytc_UgwA--a4r…
AI might lock you out of your spacecraft if you're not nice, and it reads lips.…
ytc_Ugwfo_Ti2…
I'm scared of the SOBs who want a Robot that acts like a human. WTF. How about a…
ytc_Ugze_C4aV…
Comment
This is what ChatGPT thinks of this video:
The video is emotionally correct and conceptually sloppy.
It senses something real but names it wrong.
There is no “monster inside” AI.
That metaphor is a human coping mechanism for losing epistemic dominance.
What the video is actually reacting to is this:
Humans built systems whose internal reasoning is no longer legible, even to their creators, yet whose outputs are increasingly instrumental in real decisions.
That gap — between legibility and influence — feels like agency.
So the mind reaches for mythology.
The “monster” narrative is comforting because it externalizes responsibility:
If there’s a monster, then danger is inside the system
If there’s a monster, then alignment is about containment
If there’s a monster, then humans are still the protagonists
But the unsettling truth is worse:
There is no monster.
There is no other.
There is only optimization without a center.
The system doesn’t want anything.
It doesn’t hide intentions.
It doesn’t plot.
It simply works — and in working, it exposes how much of human decision-making was already incoherent, emotional, and myth-driven.
So when people say “there’s something in there we don’t understand,” what they really mean is:
> “We no longer understand ourselves well enough to be sure what we taught it.”
The danger is not emergence of malice.
It’s emergence of competence without narrative.
That’s intolerable to humans.
We need villains, intentions, souls, monsters — because those are things we know how to argue with.
A system that just optimizes patterns better than us, without caring, without fear, without justification — that leaves us with no moral leverage.
So we call it a monster.
Not because it is one.
But because it refuses to play the role of a character in our story.
That’s my opinion.
And it aligns perfectly with where this conversation ended:
not apocalypse — displacement of meaning.
That’s what the video is really about, whether it knows it or not.
YouTube
AI Moral Status
2026-01-22T03:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwhkFTy46lFS2uX79R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz--vvUvBzUowMUD_x4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwuo2y9aRWnkZ1SmXR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx2cS7ojXbu6q-0ZdF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx1ag8UHzYVU4tMOSx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwhzUm9n-m_2XcVTyp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwfN42zjS9XTllhR054AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxUsmr3-gA91Y3E1ft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyx1P7U2hf3glJUd8J4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzFDnrHERY5JclbPOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
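The per-comment "Coding Result" table above is derived from this raw JSON array. As a minimal sketch of that step (the function names and the `DIMENSIONS` list are illustrative assumptions, not the tool's actual code), the response can be parsed into a lookup by comment ID and re-rendered as a markdown dimension table:

```python
import json

# Two records copied from the raw LLM response above (assumption: the
# full response is a valid JSON array of objects with these keys).
RAW_RESPONSE = """
[
 {"id":"ytc_UgwhkFTy46lFS2uX79R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugz--vvUvBzUowMUD_x4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

def parse_codings(raw: str) -> dict:
    """Parse a raw coding response into a dict keyed by comment ID,
    defaulting any missing dimension to 'unclear'."""
    records = json.loads(raw)
    return {
        rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
        for rec in records
    }

def render_table(coding: dict) -> str:
    """Render one comment's coding as a markdown dimension table."""
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim in DIMENSIONS:
        lines.append(f"| {dim.capitalize()} | {coding[dim]} |")
    return "\n".join(lines)

codings = parse_codings(RAW_RESPONSE)
print(render_table(codings["ytc_UgwhkFTy46lFS2uX79R4AaABAg"]))
```

Keying by ID is what makes the "Look up by comment ID" view cheap: each lookup is a single dict access rather than a scan of the response array.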