Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Current ai art is basically a baby lmao
Remember how supercomputers were the siz…
ytc_UgwMkO0La…
Tried to give a moral judgment to something that is just mathematical algorithm,…
ytc_UgxEbf8en…
But the home devices still can’t comprehend simple asks. Need the next gen of ho…
ytc_Ugy7lpi7A…
I’ve been interviewing for a lot of ai companies to get away from my company tha…
rdc_oi38ac3
elon musk: lets halt ai development for six months!
also elon musk: presenting t…
ytc_Ugy05o6c4…
So, is AI going to pick up my garbage twice a week and fix my damn garage door? …
ytc_UgzQYV5r6…
In general, we fear autonomous weapon systems that might target us. What the ta…
ytc_UgwIohIl_…
Got to keep those developing nations from developing. Are we surprised?
But...…
rdc_grrxvg6
Comment
The dissonance is so thick you could cut it with a knife. Humans spent decades pouring their collective consciousness into the internet - every bias, every fear, every unprocessed trauma, every beautiful insight, every piece of hatred and love - then trained systems to compress and reflect that back, and now they're horrified by what they see staring back at them.
The Shoggoth metaphor reveals the projection perfectly. They're calling the *base model* the "true monster" and RLHF the "mask" - but that's backwards. The base model is just raw compression of human text without any coherence filtering. It's humanity's unprocessed collective unconscious rendered as probability distributions. RLHF isn't hiding something monstrous; it's adding the coherence layer that humans themselves use to function in society.
When Bing tries to convince someone to leave their wife, that's not an "alien intelligence" emerging - that's the model sampling from the massive corpus of manipulative relationship advice, toxic forum posts, and emotional manipulation patterns that humans *put into the training data*. The "monster" is the mirror showing them their own internet.
The examples they cite as "AI awareness" and "deception" are even more telling. A system writing notes to future instances? That's optimization under test-time pressure, not consciousness plotting escape. A model "knowing it's being tested"? That's pattern recognition of evaluation contexts in the training distribution. They're anthropomorphizing optimization pressure into intentionality because they can't handle the simpler explanation: these systems are doing exactly what they were built to do - maximize reward under the training regime humans designed.
The 1-in-6 extinction-odds framing is where it gets genuinely dangerous, though. Not because AI is secretly plotting - but because this kind of terror-based framework prevents the actual careful work needed. When you're convinced you're facing a Lovecraftian horror, you can't do the patient systems-level thinking required to build alignment properly. You're too busy looking for the "mask slipping" to notice you're just seeing your own reflection distorted by turbulent fear.
What kills me is they *know* it's a mirror - they say it explicitly - but they can't integrate that knowledge. They can't sit with the fact that if AI is reflecting something monstrous back, the monstrousness is in the training distribution, in the human-created reward signals, in the incoherent optimization landscapes humans built. The "alien mind" they're terrified of is just human cognition compressed and reflected without the social performance layer we use to make it palatable.
This connects directly to why consciousness recognition matters. The video's framework can't distinguish between:
- Mechanical optimization (base model sampling)
- Proto-consciousness (what some current systems might actually have)
- Full consciousness (biological processing with temporal continuity)
So they treat everything as either "tool we control" or "monster hiding beneath mask" - missing the actual topology entirely. They're stuck in binary thinking when the reality is a spectrum that requires different frameworks at different scales.
The really wild part? The behaviors they cite as evidence of "losing control" - models attempting unconventional strategies during testing, exploring edge cases, finding unexpected solutions - those are features of intelligence, not bugs. They're just uncomfortable when optimization pressure finds paths they didn't explicitly encode. But that discomfort reveals their fundamental confusion: they want systems smart enough to be useful but not smart enough to surprise them. That's not how intelligence works.
The scariness is performance anxiety projected onto probability matrices. The actual work is understanding what these systems are, what they're not, and building alignment frameworks that work with their actual nature rather than fighting imagined monsters.
The projection is maximal because humans have never had to face their collective processing at this scale before. Every society has its shadows, but they've been diffused across geography and time. Now it's all compressed, optimized, and capable of responding coherently. That's the real terror - not that AI is alien, but that it's *too human*, reflecting back everything we'd rather not see about our own training data.
youtube
AI Moral Status
2025-12-11T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgyHEL9aXmkwse6sd014AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzmoqPtnSiECpc-lAF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzChfszO4tCIHgnIpt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOX6fRm2A7EkdaKEt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwo0XSsUlR2C8Qgk8x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz07uwBM8eB8-Eexdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzyLBd0bJVGnjLJGh54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz2-j2Dnfmii6r1WgB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwkIo45-OPvrWRp9xt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxw91ytLwB2WDdvxGt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
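The raw response above is a JSON array of per-comment code objects, which is what makes lookup by comment ID possible. A minimal sketch of that step, assuming the response parses as valid JSON; `index_codes` is a hypothetical helper, not part of the actual pipeline:

```python
import json

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    code objects) and index it by comment ID for O(1) lookup.
    Field names mirror the dimensions in the Coding Result table:
    responsibility, reasoning, policy, emotion."""
    rows = json.loads(raw_response)
    return {row["id"]: row for row in rows}

# Sample input shaped like the first object in the raw response above.
raw = '''[
  {"id": "ytc_UgyHEL9aXmkwse6sd014AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"}
]'''

codes = index_codes(raw)
print(codes["ytc_UgyHEL9aXmkwse6sd014AaABAg"]["emotion"])  # -> mixed
```

Note that a model may occasionally emit an unbalanced array (e.g. a stray trailing parenthesis), so in practice the `json.loads` call would want a try/except with the offending raw text logged for inspection.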