Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below:
- "AI is only a problem if you use it just for the sake of using AI…" (ytc_UgysiqkAv…)
- "guys dont get me wrong but its just a matter of time to AI be able to create thi…" (ytc_UgyF0LwyM…)
- "The closest analog to A.I. art is commissioning a piece but instead of contactin…" (ytc_UgwUFsxTN…)
- "AI will get rid of millions of crap jobs. The point is that this benefit needs t…" (ytc_Ugx2O3Egq…)
- "Guys don’t worry, I watched “I Robot,” we just need the robots to not hurt human…" (ytc_UgyhntjeY…)
- "Andy Jassy is a POS. The first thing Amazon should replace is him. An AI would d…" (ytc_UgyrZZEHi…)
- "The big thing is if America stops AI development, that doesn't mean other countr…" (ytc_Ugzo-UX0e…)
- "@ronrae18 accidents happen. With or without autonomous driving. If you are shit…" (ytr_UgzoRtGKC…)
Comment
Ironically, here is an AI-generated analysis of the core disconnects between Ezra and Eliezer's perspectives:
It felt like Ezra, while taking the risks seriously, consistently tried to ground the problem in human-scale terms—as a severe risk that could be *managed*. Eliezer's entire argument is that these human-scale terms are useless because the problem is absolute and inevitable.
The main points from Yudkowsky that Ezra seemed to struggle with were:
1. **Why "Slightly Off" Guarantees Total Doom:** Ezra repeatedly tried to find a middle ground, seeing misalignment as a massive "bug" or flaw. Yudkowsky's point is that it's not a bug; it's a fatal, mathematical outcome. He argues that a superintelligence pursuing *any* goal (even a benign one) with relentless efficiency will destroy humanity as a predictable side effect (the ant/skyscraper analogy), not as an "error."
2. **The "Alien" Nature of the AI:** Ezra kept reaching for human-centric analogies, like "negotiation" or our relationship with "dogs." Yudkowsky rejects this framework entirely. His point is that we are building an "alien" intelligence we cannot relate to or control. Trying to "negotiate" with it is, from his perspective, as absurd as an ant trying to negotiate with the construction foreman.
3. **The Point of the Natural Selection Analogy:** Ezra got stuck on the detail that "we can't talk to natural selection." Yudkowsky's point wasn't about communication. It was about *inevitable goal drift*. He used humans (who use birth control or non-caloric sweeteners) as an example of an intelligent "creation" that *inevitably* became "misaligned" with its creator's (natural selection's) original "goal" (gene propagation). He's arguing the same will happen with us and AI, but AI will be infinitely more powerful.
In short, Ezra was looking for a path to *coexist* and *solve* the problem, while Yudkowsky was trying to explain that the very nature of superintelligence means coexistence is impossible.
youtube · AI Governance · 2025-10-19T06:4… · ♥ 61
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
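For reference, each coded comment reduces to a small categorical record across these four dimensions. Below is a minimal sketch of that record in Python, assuming only the label sets visible in this sample and in the raw batch response further down; the full codebook may allow other values, and the class and constant names are illustrative.

```python
from dataclasses import dataclass

# Label sets observed in this sample and the batch response below;
# the actual codebook may define additional values.
RESPONSIBILITY = {"none", "ai_itself", "company", "distributed"}
REASONING = {"mixed", "consequentialist", "deontological", "unclear"}
POLICY = {"none", "liability", "regulate"}
EMOTION = {"mixed", "indifference", "fear", "outrage"}

@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions keyed by comment id."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed label sets."""
        checks = (
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        )
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```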
Raw LLM Response
[
{"id":"ytc_Ugy0XbWExcw1UATlrCZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxdd5LhOa4BqgfiVYJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwXhVfzoMiIICL_VrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxn71dq8OryH5hEXGx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwPVQ3YuLhG9kEyktR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxVF3BV6PccJUZLrGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCuyjxFjBVQedL6wB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwnSbFOhb6ZTUZkOSF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwt6IhKX3j2vNFEkI54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwlkvU-6T5Cs7xrl5d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
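The look-up-by-comment-ID inspection described at the top of the page only needs the batch response keyed by id. A minimal sketch, assuming the raw response is stored as the JSON array shown above; the file path in the usage comment is hypothetical, while the id and expected value come from the sample itself.

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse one raw batch response and key each coding record by its comment id."""
    return {record["id"]: record for record in json.loads(raw_response)}

# Usage sketch: pull one comment's coding out of the batch shown above.
# raw = open("raw_llm_response.json", encoding="utf-8").read()  # hypothetical path
# coded = index_by_comment_id(raw)
# print(coded["ytc_UgwnSbFOhb6ZTUZkOSF4AaABAg"]["emotion"])  # -> "outrage"
```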