Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Response from ChatGPT 5.2 to your video ;)
I’ll structure my opinion by addressing the **three “unfixable problems”** the author claims exist, along with the “solvable enough” hallucination issue, plus the broader conclusion about AGI.
---
## Overall take (short version)
The text raises **real, widely discussed weaknesses** of current AI models, but it **overstates their finality**. None of the problems described are imaginary—but calling them *unfixable* depends on a very narrow definition of “current AI models” and assumes **no architectural evolution**, which history suggests is unlikely.
In other words:
👉 **Good diagnosis, too strong a prognosis.**
---
## 1. “Purpose-bound pattern matching” vs. abstract reasoning
**What the text claims**
Current models are trained to find patterns in specific data types and therefore cannot become general, abstract reasoning systems.
**My opinion**
This is **partly right, partly outdated**.
* It’s true that today’s large models are primarily **statistical learners**, not explicit symbolic reasoners.
* It’s also true that they don’t have an innate, modality-independent “logic core” in the way humans do.
However:
* Research into **world models**, **tool-augmented reasoning**, **planning loops**, and **neuro-symbolic hybrids** directly targets this limitation (a toy flavor of tool use is sketched after this list).
* Empirically, modern models already perform **cross-domain abstraction** better than pure pattern matchers should (e.g., transferring reasoning strategies between domains).
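To make "tool-augmented reasoning" concrete, here is a minimal toy sketch, assuming an invented router and question format; real systems use learned tool-calling rather than string matching, so treat every name here as hypothetical.

```python
# A toy flavor of "tool-augmented reasoning": arithmetic sub-tasks are routed
# to an exact symbolic tool instead of being pattern-matched by the model.
# The router and the question format are hypothetical stand-ins.
import operator
from fractions import Fraction

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def exact_arithmetic(expr: str) -> str:
    """Deterministic 'tool': evaluates 'a op b' exactly using fractions."""
    a, op, b = expr.split()
    return str(OPS[op](Fraction(a), Fraction(b)))

def answer(question: str) -> str:
    # Crude router: arithmetic goes to the tool; everything else would fall
    # through to the language model.
    if any(f" {op} " in question for op in OPS):
        expr = question.rstrip("?").split("is")[-1].strip()
        return exact_arithmetic(expr)
    return "(fall through to the language model)"

print(answer("What is 1/3 + 1/6?"))  # -> 1/2, exact rather than approximated
```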
🔎 Key point:
They may not reason *like humans*, but dismissing their abstract capabilities entirely underestimates how abstraction can emerge implicitly from scale + structure.
**Verdict:** Legit concern, not a hard wall.
---
## 2. Hallucinations as a “solvable enough” problem
**What the text claims**
Hallucinations stem from probability completion, not retrieval; full elimination is impossible, but partial mitigation is acceptable.
**My opinion**
This is one of the **strongest parts** of the text.
* The explanation of hallucinations is basically correct.
* The idea that “I don’t know” responses should be rewarded is sensible.
* The critique that users expect certainty is also realistic.
Where I slightly disagree:
* The text frames hallucinations as *inherent* rather than *architectural*.
* Retrieval-augmented systems, verifiers, and multi-model consensus already reduce hallucinations significantly in practice.
🧠 Important nuance:
Hallucinations are a **systems problem**, not just a model problem. When models are embedded in structured pipelines, the issue becomes manageable for many real-world tasks.
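As a minimal sketch of such a pipeline, assuming a toy keyword-overlap retriever and an invented knowledge base in place of a real retrieval stack:

```python
# A minimal "abstain unless supported" pipeline. KNOWLEDGE_BASE and the
# overlap scorer are toy stand-ins for a real retrieval system.
KNOWLEDGE_BASE = {
    "eiffel": "The Eiffel Tower is located in Paris and was completed in 1889.",
    "python": "Python is a programming language first released in 1991.",
}

def retrieve(question: str) -> tuple[str, float]:
    """Return the best-matching snippet and a crude word-overlap score."""
    q_words = set(question.lower().split())
    best_snippet, best_score = "", 0.0
    for snippet in KNOWLEDGE_BASE.values():
        s_words = set(snippet.lower().split())
        score = len(q_words & s_words) / max(len(q_words), 1)
        if score > best_score:
            best_snippet, best_score = snippet, score
    return best_snippet, best_score

def answer(question: str, min_support: float = 0.3) -> str:
    """Answer only when retrieval support clears a threshold; otherwise abstain."""
    snippet, score = retrieve(question)
    if score < min_support:
        return "I don't know."  # abstention as a first-class outcome
    return f"Based on the source: {snippet}"

print(answer("When was the Eiffel Tower completed?"))  # answers with support
print(answer("Who won the 2031 World Cup?"))           # out of scope -> abstains
```

The design choice worth noting: abstention is a first-class output of the pipeline, not a behavior the model must learn unaided.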
**Verdict:** Correct diagnosis, pessimism slightly exaggerated.
---
## 3. Prompt injection as an “unsolvable” flaw
**What the text claims**
Language models fundamentally cannot distinguish between instructions and data, making prompt injection inevitable and disqualifying them from trustworthy use.
**My opinion**
This is **conceptually sharp but overstated**.
* Yes, prompt injection exploits the fact that everything is “just text” to the model.
* Yes, this is a serious security issue today.
But:
* The model does **not** have to solve this problem alone.
* Sandboxing, capability-based access, policy separation, interpreters, and external control layers already reduce injection risk dramatically (one such control layer is sketched after this list).
* Humans themselves are also vulnerable to “prompt injection” (social engineering), yet we still trust them with critical tasks—because we add **procedural safeguards**.
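To make "external control layers" concrete, here is a minimal sketch in which the model's tool request is treated as untrusted and a policy outside the model decides what runs; the tool names and request format are invented for this example.

```python
# A minimal sketch of an "external control layer": the model's tool request is
# treated as untrusted, and a policy outside the model decides what may run.
# Tool names and the request format are invented for this example.

TRUSTED_TOOLS = {"search_docs", "summarize", "send_email"}
UNTRUSTED_TOOLS = {"search_docs", "summarize"}  # no side effects reachable from injected text

def execute_tool_request(request: dict, *, source_is_untrusted: bool) -> str:
    allowed = UNTRUSTED_TOOLS if source_is_untrusted else TRUSTED_TOOLS
    tool = request.get("tool", "")
    if tool not in allowed:
        return f"refused: '{tool}' is not permitted in this context"
    return f"ok: would run {tool} with args {request.get('args')!r}"

# Injected instructions in fetched web content might make the model emit this:
injected = {"tool": "send_email", "args": {"to": "attacker@example.com"}}
print(execute_tool_request(injected, source_is_untrusted=True))  # refused

legit = {"tool": "search_docs", "args": {"query": "prompt injection defenses"}}
print(execute_tool_request(legit, source_is_untrusted=True))     # ok
```

Nothing in the injected text can widen the whitelist, because the whitelist lives outside the model.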
🔐 The real mistake is assuming:
> “If the model alone can’t solve it, it’s unsolvable.”
That’s not how secure systems are built.
**Verdict:** Serious issue, but not fundamentally disqualifying.
---
## 4. “Interpolation, not extrapolation”
**What the text claims**
Models cannot generalize outside training distributions; they remix rather than create; this blocks scientific discovery.
**My opinion**
This is **partly true, partly misleading**.
* Models are indeed much better at interpolation than extrapolation.
* Image/video generation failures outside distribution are obvious.
However:
* Humans also struggle badly outside their experiential distribution.
* Scientific progress often comes from **structured exploration**, not raw imagination.
* When models are combined with simulators, search, and feedback loops, they already outperform humans in narrow scientific domains (a toy version of that loop is sketched below).
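As a hedged illustration, here is a minimal propose/simulate/score loop, with an invented stand-in objective in place of a real simulator; every name is hypothetical.

```python
# A toy "propose -> simulate -> score -> keep best" loop, the structured-
# exploration pattern mentioned above. The objective function is a
# hypothetical stand-in for a real simulator.
import random

def simulator(candidate: float) -> float:
    """Stand-in feedback signal: higher is better, peak at 3.7."""
    return -(candidate - 3.7) ** 2

def structured_search(n_rounds: int = 200, seed: int = 0) -> float:
    rng = random.Random(seed)
    best, best_score = 0.0, simulator(0.0)
    for _ in range(n_rounds):
        proposal = best + rng.gauss(0, 0.5)  # "model" proposes near the current best
        score = simulator(proposal)          # simulator provides feedback
        if score > best_score:               # feedback loop keeps only improvements
            best, best_score = proposal, score
    return best

print(round(structured_search(), 2))  # typically lands near 3.7
```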
🧪 The limitation is real—but it’s not unique to AI, and it’s not absolute.
**Verdict:** Real constraint, but not a showstopper.
---
## Final claim: “Current AI companies are in big trouble”
**What the text claims**
Generative AI revenues will collapse because these systems can’t reach AGI.
**My opinion**
This is where the argument is weakest.
* AGI is **not required** for enormous economic value.
* Translation, coding assistance, design, logistics, drug discovery, and education do not require human-level general intelligence.
* Historically, tools don’t need to replace humans to be transformative.
📉 Overhyping AGI timelines is a problem—but that doesn’t imply imminent collapse.
---
## Bottom line
**I would summarize my opinion as follows:**
* ✅ The text correctly identifies **real weaknesses** in current AI systems.
* ❌ It treats today’s architectures as if they were the *end of the road*.
* ⚠ It conflates “not human-like” with “not viable.”
* 🔄 It underestimates how much **system design**, not just model design, matters.
**I’d rewrite the title as:**
> “Current AI Models Have Serious Structural Limits — But They’re Not the End of the Story.”
Source: youtube · Posted: 2025-12-30T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxJinD859j9N6NOiC14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzvEQpplpcLP6U-TJR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxIkV0SwnOUoxMfs_94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzbFLQ-yb_iq5BRI_94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzTIU7hQ4m01FLiE6Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwkI3EYLDgatcuHeX14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgygNH25v7S9mCXpKpN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx_cxyx_TMRycqj07p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5zo8xkK0FGQv1Uj14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzF7YQGtuoOSjCWFVx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"}]