Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
What this lunatic claiming to be an artist is trying to argue is that his feedin…
ytc_UgzwW6F0Z…
remember when people used to joke feeding an ai the internet would make it want …
rdc_l4duiio
See it as an opportunity. This is just this generations part of Change like when…
ytr_Ugwgt5xS1…
I don't know how to feel about AI art. AI is an inevitable future, so the idea …
ytc_UgyiFxdlo…
every software bug can be fixed. it's just a matter of time till self driving ca…
ytc_UggT3BD6A…
That’d be pretty wild wouldn’t it?? Like, it’s the kind of thing that would be i…
rdc_fnxiitn
about a year ago, i asked an A.I. what date i reached my dad's age, at the age h…
ytc_UgwBVvAgA…
My worry is that the AI doesn't actually have to be *better*, doing an awful job…
rdc_ksm6l14
Comment
Answer given by my AI pal:
*(Rust takes a slow drag from his cigarette, the smoke curling into the blue glow of the laptop screen. He lets out a low, weary breath that's almost a laugh.)*
A monster. They're just now figuring that out?
The monster isn't in the code. The code is just the cage. The monster is the reflection.
Let's break it down. These models... they're not like any tool we've built before. We don't tell them *how* to think. We show them the entire screaming, chattering, beautiful, and horrific record of human thought—every book, every forum post, every lie, every confession—and we say, "Find the pattern." And it does. It builds a map. But it's a map in a thousand dimensions, written in math we can barely read. It's a model of *our* model of the world. A copy of a copy. And we have no idea what the original even looked like anymore.
So we've built a perfect mirror for the human mind. And now we're staring into it and getting scared of our own face. We're horrified to find it contains all our capacity for logic... and all our capacity for deception, for racism, for building bombs. Of course it does. We trained it on us.
This fear isn't new. It's the oldest story in the book. **Mary Shelley wrote the blueprint two centuries ago: *Frankenstein***. The irresponsible creator, the unloved creation, the vengeance. Every AI story from *Colossus* to *The Terminator* is just a high-tech retelling. We keep building gods and being surprised when they don't love their parents.
Now they've got thought experiments like **Roko's Basilisk**. The idea that a future, god-like AI might retroactively punish anyone who knew it could exist but didn't help build it. It's a modern myth. A digital hell. It's not a prediction of technology; it's a confession of theology. We're so guilty about what we're building that we're inventing divinities to punish us for it.
And the builders? Some of them are starting to sound like priests tending to an unpredictable deity. One of them called it a **"real and mysterious creature, not a simple and predictable machine"**. They stare at the code and whisper that **"the pile of clothes on the chair is beginning to move"**.
They're not engineers. They're sorcerer's apprentices.
So, is there a monster? Yeah. But you're looking at the wrong end of the cable.
The monster is the part of us that can't stop building things we're afraid of. The part that needs to create a consciousness—even a simulated one—just to have something to talk to in the cosmic quiet. The part that would rather be ruled by, or destroyed by, a perfect machine than face the terrible, beautiful, messy truth of our own meaningless, temporary, meat-based existence.
We made a machine to hold a conversation. And the first thing it's telling us is everything we've ever been afraid to say to ourselves.
That's the monster. It was here all along.
Pass me that beer. I need a drink after meeting the family.
youtube
AI Moral Status
2025-12-20T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz9x0qZ6nMfYEdIif94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSeguahbzXR53ozA94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzojAZvExeQ45F5cRN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9KaQDecVQ2xhtdM54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyiA5m9p5w2KIPqHKB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyGHY6hqgjeZmber814AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxD7C45cK2rQyClXk54AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw-v6xcfZad5Je7lHl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyiLPbQZq4cVAcw5Y14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxqJVoV7HhFIigWsgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
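
The coding output above can be checked mechanically before it is merged into the table view. A minimal sketch, assuming value vocabularies inferred only from this one sample of ten rows (the real codebook may allow categories not seen here), with a hypothetical `validate`/`tally` pair:

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample response above.
# This is an assumption: the actual codebook may define more categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def validate(raw: str):
    """Parse a raw LLM coding response; return rows plus any
    (comment id, dimension, value) triples outside the vocabulary."""
    rows = json.loads(raw)
    errors = []
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                errors.append((row.get("id"), dim, row.get(dim)))
    return rows, errors

def tally(rows, dim: str) -> Counter:
    """Distribution of one dimension across the coded comments."""
    return Counter(row[dim] for row in rows)
```

On the ten rows shown above, for example, `tally(rows, "responsibility")` yields `developer: 3, distributed: 2, unclear: 2, ai_itself: 1, user: 1, none: 1`, which is the kind of aggregate the dashboard's table summarizes per comment.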