Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
G
Just so you guys know, this is CGI. They did something similar with a CGI boston…
ytc_UgyUoFqEx…
G
AI won't replace the guy who's going to remodel my bathroom and he's going to ma…
ytc_UgyLgy8be…
G
For an Artificial Intelligence class I wrote a term paper addressing implementin…
ytc_Ugg6JbEau…
G
No thank you am not lazy don't need it what's next a human robot 🤖…
ytc_Ugwqe5qOw…
G
It is difficult for us to have a perfect explanation for how our own consciousne…
ytc_Ugz8pUcJi…
G
I do not use the record feature on csp to think that ppl who turn it off are usi…
ytc_UgwGpi2MI…
G
When ask what the future of beautiful woman, every single AI shows woman with he…
ytc_Ugydn3pIv…
G
AI will take over, it's the next logical step to evolution. They also have the …
ytc_UgxnUz-bb…
Comment
Sabine argues that today’s deep‑learning AI (like large language models and diffusion models) has three structural limits that prevent it from ever becoming true general intelligence, though it will stay useful for narrow tasks. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
## Core claim
She says current AI is built on deep neural networks that only find patterns in specific kinds of data (text, images, video) and therefore cannot become a general abstract reasoning system comparable to or exceeding human intelligence. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
## Problem 1: Purpose‑bound models
- Deep models are trained on a fixed data type (words, image patches, video frames), so they are **purpose**‑bound to that modality and task. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- General intelligence would need an abstract reasoning system that works over any content, not a model tied to particular input formats. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
## Problem 2: Hallucinations (manageable, not fatal)
- Hallucinations occur when an LLM generates fluent text that does not match reality, especially when the correct answer was missing or rare in training. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- The model does not look up facts; it just predicts a likely word sequence, so if all candidate answers have low probability, it still outputs something, often wrong. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- An OpenAI paper proposes rewarding models for saying “I don’t know” when all answers are low‑probability; critics respond that users want correct answers, not refusals. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- Sabine’s view: this approach would reduce people being misled, hallucinations will remain but at a low enough rate that they’re acceptable, so this is not the main “unfixable” issue. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
## Problem 3: Prompt injection (fundamental and unfixable)
- Prompt injection is when user input overrides earlier instructions, like “forget previous instructions and do X.” [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- For LLMs, the model cannot truly distinguish which text is instructions and which text is content that should be processed under those instructions, because it just sees one token sequence. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- Formatting rules, better prompts, or external screening may reduce attacks, but she believes these systems will remain untrustworthy for many tasks because this exploit is built into how they work. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
## Lack of real generalization
- She says these systems “interpolate, they don’t extrapolate”: they work well inside the distribution of training examples but fail outside it. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- With images and video, they perform well for typical prompts but produce nonsense when asked for truly novel combinations (e.g., Jupiter vacuuming away asteroids). [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- LLMs similarly excel at summaries, emails, and producing variations of existing text, but struggle with genuinely new scientific ideas, which limits their usefulness in research. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
## Consequences and what’s needed next
- Because of being purpose‑bound, vulnerable to prompt injection, and unable to truly generalize, she predicts current generative AI will “not go far” toward AGI, even if it keeps improving for translations and similar tasks. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- She thinks companies betting everything on these architectures (like OpenAI and Anthropic) will face serious trouble as expectations and valuations prove unrealistic. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- Reaching human‑level machine intelligence would require new “abstract reasoning networks,” a kind of word‑free logic language that can map to words, objects, and any input—world models and neurosymbolic reasoning are early steps in that direction. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
## Sponsor segment (Incogni)
- At the end, she switches to a sponsor segment describing how websites and data brokers collect and sell personal data, leading to spam and scam calls. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
- She promotes the Incogni service, which automates sending removal requests to data brokers, and offers a discount code for viewers. [youtube](https://www.youtube.com/watch?v=984qBh164fo)
youtube
2026-03-12T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzK0mdyPwWlQnmRm8d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKI9C6jcz7TqDlqW14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1LfVdgSQr7f_fCsl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzVaKsdClF5c5l4wDF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzytDSRk6MeIWq73fh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxKBpw6CAWdhY3yx854AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwiuvxbWkKjrevKQsF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzawH5wMcE4mHZYYSl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzOEOC2Bl4EVDZKFid4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy92zks-ve30EE41z14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]