Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugx7cOBsZ…: Is this video ai generated? I can’t really check rn cus I’m on my phone, can som…
- rdc_dbve3ai: Choosing to measure the rights and liberties of an organism by complexity of con…
- ytc_UgzEEUnHB…: Because "the rule book" for validating voice ID, photo ID, and writing style ID,…
- ytc_UgzbSW5ZJ…: for me thats all BS, 1.earth never been our property thinking it is ours its jus…
- ytc_UgyeK4yY9…: This video felt like pure fear and doom-farming. We are nowhere near AGI while u…
- ytc_Ugy1DR3j4…: Ai art is not nearly good enough to do certain types of art. Anything with a set…
- ytr_Ugwhxgg7p…: It'll be the end of news bloopers because anything that is a hilarious wrong wor…
- ytc_UgxRSKUbW…: bro it’s a cctv camera from a store how the hell are they gonna use that for fac…
Comment
Hank, with all respect, I think your sci-fi fandom won out over your scientific skepticism here. I came in with an open mind, but this honestly reeks of the type of pop pseudoscience that gets freshmen talking at parties more than it actually moves knowledge forward. Nate's clearly an intelligent conversationalist, but he's also much too impressed with the current technology, to the point where he consistently commits the exact fallacies that he warns about.
First, his premises are wildly speculative and unfounded. LLMs are the first step to general intelligence? I disagree, I think they're a normal technology that's good at imitating natural language. "Superintelligence" is also too vague to be an actual thing as long as "intelligence" is vague and undefined. What specific studies, what specific developments indicate anything approaching the tech level he's predicting? To this, he gestures vaguely at "the massive technological leap" we saw with LLMs and claims something else is probably around the corner. What if it isn't? We've been promised this massive leap for nearly half a decade and all these companies have to show for it are marginal improvements and an economic bubble threatening to topple the stock market. He speaks in layman's terms and analogies, but does he actually work in the field building models, or just at a think tank on futurism?
Second, while he clearly states that AI/LLMs have completely alien structures compared to human thought, he constantly anthropomorphizes them. He seems genuinely moved by Claude's "review" of his book, speaks of Claude "growing up," imagines wants and desires, and warns that "mistreatment" would be dangerous, despite never explaining why treatment or mistreatment would register the same way for such a hypothetical mind, and so on.
The moment he really lost me was the point about humor. He says we can't program humor but the LLMs have figured it out. No they haven't! Show me the LLM that is consistently original and funny and I will eat my words, but I don't believe they exist. The models have learned common patterns to jokes and may occasionally put together sentences that register as real humor, but the fact that they will also throw out joke-shaped sentences that don't register at all is proof enough that they haven't "understood" humor in any meaningful way.
Most damning of all, he takes the CEOs of these companies at their word. When Sam Altman warns that "AI is dangerous" or that there's a 20% chance the world is destroyed or whatever, that's not a serious expert giving a scholarly estimate; it's MARKETING.
I think this article/review does a good job of presenting a counterargument to the book: https://www.theatlantic.com/books/archive/2025/09/what-ais-doomers-and-utopians-have-in-common/684270/
You might also be interested in his co-author's background. The name rang a bell, and indeed Yudkowsky is the guy whose blog on "rationality" and "decision theory" spawned several AGI-obsessed cults in and around Silicon Valley: https://www.nytimes.com/2025/08/04/technology/rationalists-ai-lighthaven.html (oh, he's also the author of a notable Harry Potter fanfic, but that's a whole other story).
According to Wikipedia he's an autodidact and, while he has been cited and discussed by accredited scholars and we should be open to non-institutional expertise, I also think we should be skeptical of any work being sold by a professional doomsayer. Of course someone with that kind of platform would be eager to see portents in the emergence of LLMs. Pretty ironic that Nate nods along to the "a man's salary" quote when this guy literally makes money from people believing AGI to be a genuine threat. He's also been involved in the Effective Altruism movement, which is a whole can of worms (there's a good PhilosophyTube video on them).
I'd love to hear your thoughts on these issues, and would also love to hear you chat with an expert in the field who is less credulous about the wild promises coming from people financially invested in these technologies. Maybe this Anil Dash guy would be a good candidate.
youtube · AI Moral Status · 2025-10-30T21:3… · ♥ 170
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
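Records like this one can be sanity-checked before aggregation. Below is a minimal Python sketch of such a check; the allowed value sets are inferred from the sample responses shown on this page, and the real codebook may define more categories, so treat `ALLOWED` as an assumption:

```python
# Allowed values per coding dimension, inferred from the sample responses
# on this page (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "company"},
    "reasoning": {"none", "mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"none", "mixed", "outrage", "fear", "indifference"},
}

def invalid_fields(code: dict) -> list[str]:
    """Return the dimension names whose values fall outside the known sets.

    Missing dimensions count as invalid, since dict.get() returns None.
    """
    return [dim for dim, allowed in ALLOWED.items()
            if code.get(dim) not in allowed]

# The coding result from the table above.
code = {"responsibility": "none", "reasoning": "mixed",
        "policy": "none", "emotion": "mixed"}
print(invalid_fields(code))  # []
```

A record flagged here would then be routed back for re-coding rather than silently dropped.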
Raw LLM Response
[
{"id":"ytc_UgwtO52xvab8-cKmzQd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyOV3SMj8EX7DkWVdd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxWjijhdmvpPTVn3YB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxVtSbJmuxFH7nKUG54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz-P8VsXJjKSXwLjHN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxKpRrR_Fv_Yi326TZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzRbvsRbEcnNrukpeZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy3ggHAlCky6fPsT6V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDglfAYpYKapHo04l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwaOSVLhfbzsI1OH3l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
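A response in this shape can be parsed and indexed to support the look-up-by-comment-ID feature above. A minimal Python sketch, using two records copied from the sample (the `index_by_id` helper name is hypothetical, not part of the tool):

```python
import json

# Raw model output: a JSON array of per-comment codes, in the shape shown above.
raw_response = """
[
  {"id": "ytc_UgwtO52xvab8-cKmzQd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyOV3SMj8EX7DkWVdd4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]
"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and index the code records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgyOV3SMj8EX7DkWVdd4AaABAg"]["policy"])  # liability
```

In practice the parse step would also need to tolerate malformed model output (e.g. a `try`/`except json.JSONDecodeError` around `json.loads`), since the response is free-form LLM text.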