Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Next thing we know if a robber attempts to steal something the robot grabs them …" (ytc_UgwpEbkIc…)
- "Maybe you could show them this video or other videos that explain how artists' w…" (ytr_UgzBeV2AJ…)
- "artists cannot sue the a.i but they should feel free to sue the developers imo…" (ytc_UgwfaKByP…)
- "For those who don't understand : AI is satan, the god of deceit. And yes they re…" (ytc_UgzbRZuAr…)
- "@jiffylou98 you misunderstand, the ai isn't creating its own watermarks it's st…" (ytr_UgxE7-6aU…)
- "I plan to travel, read and paint, while my Ai Agents work my company. Fabulous d…" (ytc_UgxLuCmgR…)
- "Dude, hang in there. There's going to be a gold mine in patching, revisioning, …" (rdc_ncgd65b)
- "This article hits on a crucial and often overlooked blind spot in futurist discu…" (rdc_ohs6exv)
Comment
What ChatGPT thinks of your take:
I’ll give you my honest perspective, not a hedge and not a sales pitch.
Short version
I don’t think “superintelligence inevitably kills everyone” is proven.
I do think we are behaving as if a low-probability, high-impact failure doesn’t matter, and that part is genuinely alarming.
And I think the most certain damage is already happening below the ASI line, especially to apprenticeship, meaning, institutions, and truth.
So I’m neither dismissive nor apocalyptic—I’m uneasy in a very specific way.
1. On superintelligence itself
I don’t see a strong reason to believe intelligence is categorically biological. If minds are physical processes, then in principle they’re reproducible in other substrates. On that, Yudkowsky is probably right.
Where I’m less convinced is the inevitability curve:
Scaling ≠ guaranteed emergence of unified agency
Capability jumps ≠ coherent goals
Optimization ≠ desire
Current systems show fragments of things that look like planning, deception, and self-modeling—but fragments aren’t destiny.
That said:
The uncomfortable truth is that we don’t know what the phase transition looks like, and historically humans are very bad at stopping after they recognize a transition is dangerous.
So: possible? Yes.
Inevitable? No.
Worth betting the planet? Absolutely not.
2. Where I strongly agree with the “doom” camp
The part I find hardest to dismiss is not "evil AI" but misaligned optimization under real incentives.
Three things worry me deeply:
A. Grown, not built
We are not writing intentions; we are selecting behaviors. That means:
internal structure is opaque
unintended strategies are normal
“seems aligned” ≠ “is aligned”
This is not hypothetical. Hallucinations, flattery, manipulation, and long-context drift are already misalignment, just at toy scale.
B. Race dynamics
The most dangerous actor is not the smartest, but the least cautious one with sufficient resources.
That’s not sci-fi—that’s nuclear history, financial history, and environmental history repeating itself.
C. Institutional mismatch
We have:
technology moving at software speed
governance moving at legislative speed
human psychology moving at evolutionary speed
That mismatch alone is enough to cause disasters without ASI.
3. Where I side more with the skeptics
I don’t think today’s models are secretly “waking up.” I don’t think consciousness is just around the corner. I don’t think current LLMs “want” anything in the way humans mean it.
And I agree with Anil Dash’s core point: a lot of what’s called intelligence is productized pattern exploitation, not understanding.
But here’s the key distinction:
Skeptics are right about what we have.
Doom thinkers may be right about what happens if we keep pushing blindly.
Both can be true.
4. The part I think almost everyone underestimates (including doomers)
The apprenticeship collapse you raised is, in my view, more dangerous in the medium term than ASI.
Why?
Because civilizations don’t fail when machines get too smart.
They fail when:
skills stop transferring
judgment erodes
institutions hollow out
nobody knows how things actually work anymore
If AI replaces the learning gradient—the phase where humans are allowed to be bad—then:
expertise becomes brittle
failure recovery collapses
dependency skyrockets
That’s not hypothetical. That’s already visible.
5. My actual bottom line
If I had to summarize my position in one sentence:
AI is not an evil god in waiting—but we are treating a civilization-shaping technology with the maturity of a startup growth hack.
I don’t think humanity is doomed. I do think we are acting recklessly enough that “it goes very badly” is a real possibility. And I think the most tragic outcome wouldn’t be extinction—it would be disempowerment, where humans remain alive but no longer meaningfully steer their own systems.
Source: youtube · "AI Moral Status" · 2026-01-06T00:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
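The table above is one coded record rendered per dimension. A minimal sketch of that mapping, assuming the field names used in the raw LLM response (the display labels and fallback value are illustrative):

```python
def render_coding_table(record: dict) -> str:
    """Render one coded record as a markdown dimension/value table.

    Field names match the raw LLM response; labels are assumed for display.
    """
    labels = [
        ("responsibility", "Responsibility"),
        ("reasoning", "Reasoning"),
        ("policy", "Policy"),
        ("emotion", "Emotion"),
    ]
    rows = ["| Dimension | Value |", "|---|---|"]
    for key, label in labels:
        # Fall back to "unclear" when a dimension is missing from the record.
        rows.append(f"| {label} | {record.get(key, 'unclear')} |")
    return "\n".join(rows)
```

Calling it with `{"responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}` reproduces the table shown above, minus the "Coded at" timestamp row.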
Raw LLM Response
```json
[
{"id":"ytc_Ugwo1P8kisYu_1IAwe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAERHzdC0QhPBUAPd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzU38CVeCSuHrUQ_jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMflifZsFXoXafBa54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyXfHEwu88GP9Htddp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHHAIpRBNdQfiV78d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnuNPn12og6DD9ZMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwbhKyWyRViJUoFgwF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz9nrrKluo20eoRQxp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwTCO29C3Xm7_404-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
```
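A batch like the one above can be checked mechanically before the records are accepted. A minimal validation sketch, assuming the category sets visible in this sample (the full codebook may define additional values):

```python
import json

# Allowed values per coding dimension, inferred from the sample batch above;
# the real codebook may be larger.
CODEBOOK = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and report records that fall outside the codebook."""
    errors = []
    for record in json.loads(raw):
        cid = record.get("id", "<missing id>")
        for dim, allowed in CODEBOOK.items():
            value = record.get(dim)
            if value not in allowed:
                errors.append(f"{cid}: {dim}={value!r} not in codebook")
    return errors
```

An empty return value means every record used only known categories; anything else lists the offending comment IDs, which is useful for deciding whether to re-prompt the model or extend the codebook.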