Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgyPgv3E_… · "🤨I wonder if some of these people posting videos on YouTube, Tiktok, Instagram, …"
- ytc_UgxB_urgV… · "The ai tried to save him.. where from you got to conclusion of blaming the ai?…"
- ytr_UgxGusIZj… · "You do realize many AI models are run locally on the computer with no interferen…"
- ytc_Ugzp8ZF9V… · "AI "artist" are scum, they openly admit to stealing while making excuse after ex…"
- ytc_UgwAugQyT… · "Human beings are not special, we have no "creativity" that cannot be copied. Lik…"
- ytc_Ugx4bfx3E… · "When this came out, I was like "bs. he's lying.", but ever since chatGPT was rel…"
- ytc_UgyUKe1NR… · "AI, When googly has context on their videos, it usually comes from bad informati…"
- ytc_UgwsiW0DG… · "AI has ready proven to be a potential danger to humanity when chat GPT encourage…"
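Outside the click-through UI, looking a coded comment up by its ID is a simple scan over the stored records. Below is a minimal sketch, assuming the coded output is saved as a JSON array of objects shaped like the Raw LLM Response at the bottom of this page; the file name `coded_comments.json` is hypothetical.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of objects that each carry an "id"
    key, matching the Raw LLM Response format shown below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)

# The truncated IDs listed above must be expanded to their full form first.
record = lookup_comment("ytc_UgzPMHmrsuxd7n_ZoNZ4AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])
```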
Comment
Great video, Dave, but I want to push back on a few core claims in an evidence-minded way, because I think the existential argument you presented leans heavily on speculation rather than on what we actually know today.
First, recursive self-improvement (RSI) is an assumption, not a demonstrated fact. Current models do not autonomously redesign their own architectures, fix their own hardware bottlenecks, or expand global power and materials supply. Labs are experimenting with automating parts of research, but that is a long way from a closed loop that reliably improves itself forever. Treating RSI as inevitable is a huge logical leap.
Second, today’s systems are fundamentally pattern predictors. They can generate convincing outputs, but they do not have human-style understanding, goals, or desires. That is why they hallucinate, why they fail at long chains of causal reasoning, and why they can behave strangely in contrived tests. Those odd behaviors are real and worrying, but they do not prove that a runaway, goal-seeking mind already exists or will necessarily emerge.
Third, many of the dramatic experiments you showed are deliberately contrived to probe failure modes. Those studies are valuable, but they show what can happen under extreme stress tests, not what will happen in ordinary deployment. It is correct and useful to study these scenarios, but we should not convert every stress test into a prediction of doom.
Fourth, physical realities matter. Building and maintaining orders of magnitude more compute requires enormous energy, cooling, minerals, and logistics. You cannot simply will infinite compute into being. Robot armies maintaining datacenters still need raw materials, power plants, and supply chains. Real world constraints impose hard ceilings and diminishing returns that are routinely underplayed in doomsday narratives.
Fifth, there are fundamental trade-offs in intelligence and systems design. Being extremely good at one domain often reduces competence in others. Taste, creativity, social intuition, and embodied experience are hard to replicate. You can optimize for research performance or coding, but that does not automatically give you the full, robust generality the term AGI implies.
Sixth, the most useful course of action is dual-track. Take safety seriously in research and push for binding, enforceable regulation and export controls. At the same time, focus on the harms that are already happening today: surveillance, misinformation, job disruption, environmental cost, and concentration of power.
In short, the extinction argument is worth studying, but it should not crowd out clear policies for present risks. RSI and superintelligence are hypotheses, not facts. Treat them as scenarios to plan for, not as guaranteed outcomes. Right now we get the most leverage by fixing current governance, transparency, and incentive problems so that if and when bigger technical leaps occur, society is prepared and not forced into panic decisions.
youtube · AI Governance · 2025-08-26T16:3… · ♥ 549
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
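The four coded dimensions form a small closed vocabulary. Here is a minimal sketch of that schema as a typed record, with the value sets inferred only from labels visible on this page; the actual codebook may define additional values.

```python
from dataclasses import dataclass

# Value sets observed on this page; the full codebook may allow more.
RESPONSIBILITY = {"ai_itself", "company", "none", "unclear"}
REASONING = {"consequentialist", "deontological"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"mixed", "indifference", "outrage", "fear"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T19:39:26.816318"

    def is_valid(self) -> bool:
        """Check every dimension against the observed value sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```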
Raw LLM Response
[
{"id":"ytc_UgzPMHmrsuxd7n_ZoNZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxbOtNgHVoWjn9HzkZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5KxzzQk9g8DMMM4V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx1ftnlzws5Z9HJAIR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwa7N7bz0JkfVq4S6t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
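Raw model output is not guaranteed to be well-formed, so it pays to parse it defensively before writing records to the store. A short sketch, assuming the batch format shown above; the function name and error handling are illustrative, not part of the tool.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response like the array above.

    Raises ValueError if the model emitted malformed JSON, a non-array
    top level, or records missing required keys.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    if not isinstance(data, list):
        raise ValueError("expected a top-level JSON array of coded records")
    for i, rec in enumerate(data):
        if not isinstance(rec, dict) or REQUIRED_KEYS - rec.keys():
            raise ValueError(f"record {i} is malformed or missing keys")
    return data
```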