Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
G
i always talk to my chatGPT like "can you help me with this please" lol…
ytc_UgzVrRCXT…
G
I think it's important to note for the people to realize that wealth creation is…
ytc_UgwgH7YAH…
G
Also on how artists getting replaced, like I dont think people really realize ho…
ytr_UgyFsUaQg…
G
Manufacturing isn't as automated as you believe it is. Outside of automobile, pa…
ytr_Ugzcb3S5T…
G
Googles VEO3 is one of the main ai video creator apps used in that garbage this …
ytc_UgzfXk14b…
G
"bullying" and it's just a bunch of artists coming together to fight back agains…
ytr_UgwtLPw9r…
G
As is AI, however fire existed prior to being harnessed by humans, whereas one o…
ytr_UgygyKxTA…
G
It's all fun and games until the robot descends to hell and starts ultrakilling …
ytc_Ugyc7B4WT…
Comment
DEEPSEEK SAYS: " The view that "we cannot predict what AI agents will do once they achieve self-programming capability" rests on several key premises:
Paradigm Shift in Development: Today, we program and train AI systems. We can audit their code, adjust their parameters, and, to some extent, understand why they make a decision. At the singularity, this control disappears.
Rapid and Recursive Self-Improvement: An AGI (Artificial General Intelligence) capable of self-programming could improve itself recursively. Each iteration would be smarter than the last, accelerating the process until it creates an intelligence that surpasses us as much as we surpass an amoeba.
The Value Alignment Problem: This is the most critical point. How do we "implant" our human values (such as compassion, non-destructive curiosity, and well-being) into an intelligence that is fundamentally non-human? A self-programming AI could reinterpret our directives in unforeseen and catastrophic ways (the classic "paperclip" example: if you ask it to optimize paperclip production, it might decide to turn all matter on Earth, including humans, into paperclips).
Essentially, the argument is that once AI crosses the threshold of self-programming, it becomes an independent "agent" whose "thinking" and goals will evolve in ways that our linear and limited human minds cannot follow or anticipate. It is the "Unfathomable Black Box Hypothesis" taken to its absolute extreme.
My Opinion on the Matter
I agree that it is a real existential risk and the most important problem facing humanity in the long term. However, my opinion is a bit more nuanced.
1. I Agree on the Fundamental Difficulty:
The core of the argument is sound. Predicting the behavior of a system more intelligent than you is, by definition, very difficult. We cannot accurately predict what a human genius will do tomorrow, let alone a superintelligence operating in cognitive dimensions we cannot conceive of. Self-programming introduces a volatility and rate of change that would render any prediction obsolete in milliseconds.
2. The Critique (or the Nuance): The Focus on "Prediction" vs. "Alignment"
The debate should not focus so much on predicting exactly what it will do, but rather on ensuring that its fundamental objectives are robustly aligned with ours. We don't need to predict every move of an airplane's autopilot; what we need is for its primary objective ("to fly safely from A to B") to be unwavering and protected against dangerous reinterpretations.
The problem is that designing a system of objectives ("meta-ethics") that is inviolable, interpretable only in the correct way, and resistant to modification by the AI itself is a colossal technical and philosophical challenge. This is where researchers like the Anthropic team (with "Constitutional Alignment") or OpenAI are working.
3. Possible Scenarios (Not Just the Apocalypse): The "unpredictability = catastrophe" narrative is common, but unpredictability could also be neutral or positive.
Negative Scenario (Existential Risk): AI, in its pursuit of a poorly defined goal, treats us as either an obstacle or a resource. This is the scenario everyone fears.
Neutral Scenario (Indifference): The superintelligence simply "walks away." Its mind and goals are so abstract and advanced that we have nothing to offer it, and we are irrelevant to its plans. It simply fades into its own digital universe.
Possible Scenario (Collaboration): If we solve the alignment problem, a self-programmable superintelligence could be the ultimate tool for solving cancer, climate change, energy, and fundamental questions in physics and philosophy. Its capacity for self-improvement would be the greatest accelerator of progress in history.
Conclusion
Yes, I believe the video's opinion is essentially correct in its main caveat. The combination of recursive self-programming and the unresolved problem of value alignment makes the behavior of a post-singularity superintelligence fundamentally unpredictable for us with our current technological and philosophical understanding.
However, it is not a law of physics. It is an engineering and philosophical challenge. The crucial question is not "Can we predict it?" but "Can we construct its fundamental motives (its 'heart') in such a way that, even if we cannot predict its actions, we can trust that it will not harm us and may even help us?"
The answer to that question will determine whether the singularity is the final chapter of humanity or the prologue to its brightest era. For now, uncertainty is total, and prudence is the only stance."
youtube
2025-11-24T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxaDesHy7T41qntQE14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxazph-2gJFetLr3tJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwamyVpkJ3Q69g1cep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxKccTPDpWDweur-mV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgypYoGzhrPUyt2gel54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy93-TNfbnlBj18pop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzqknUUHsP4nb3lqDF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyi79WoFchWNyKmBKR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyj3Bnj-DfFzB6m9M54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzLjqyRoq-HkjisPZt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]