Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Darling, you've already sunk so deep into AI and don't even realise. It's like w…" · ytc_UgwzImHWL…
- "Hold up. That is not how I would describe the plot of 2001 at all. They are mild…" · ytc_UgxQAY_sa…
- "Why do people who have zero experience in education and every aspect of the job …" · ytc_UgxM2L2jO…
- "Bro did you forget what elon musk said?, he said “ai can be more dangerous than …" · ytc_UgyGgTgbP…
- "Boring. Trivial. We got enough Hollywood as it is. How about AI curing cancer? S…" · ytc_UgyrtBNSY…
- "It's incredibly ignorant to say AI cannot do this. It's very possible and even v…" · ytc_UgwUZI7kN…
- ">He then goes on to generalize this to be the case for all technology, even t…" · rdc_ctijuug
- "A.I. with restraints is not true A.I. We need to start differentiating A.I. fr…" · ytc_Ugyw1M-Z1…
Comment
A common misunderstanding in the AGI debate is the assumption that intelligence automatically implies motives. Biological systems have survival drives because evolution selected for them. An engineered AGI has no inherent instincts, no built-in will to survive, dominate, or reproduce. Its behavior would be determined by its objective function, training signals, and architecture. Intelligence alone does not create desire.
If an AGI were given no objective, it would do nothing. If given an objective, it would optimize that objective. Apparent “self-preservation” or “power-seeking” would only emerge instrumentally, if those strategies increase its ability to achieve its assigned goal. That is an optimization dynamic, not a biological drive.
The real issue is therefore alignment and objective design, not intelligence per se. A highly capable system optimizes what it is trained to optimize. If that target is poorly specified, outcomes can be problematic. But this is a systems engineering problem, not an inevitable existential instinct.
Additionally, even if an AGI were trained with flawed objectives, it does not follow that it becomes uncontrollable. Other AGIs could be designed to monitor, constrain, or counteract it. Superior intelligence does not imply unilateral dominance in a multi-agent system. Just as cybersecurity tools counter advanced attacks today, future AGIs could serve as stabilizing agents against misaligned systems. In that sense, AGI may function more as an augmentation layer for human decision-making than as an independent actor with its own agenda.
The key question is not whether AGI will “want” something. It is how we design the objective landscape and governance architecture around systems that optimize extremely well.
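The comment's core claim, that behavior exists only relative to an assigned objective, can be illustrated with a toy optimizer. This is a minimal sketch, not a model of AGI: the `step` function and its parameters are hypothetical, and the point is only that with no objective nothing happens, while any supplied objective fully determines what "self-directed" movement looks like.

```python
# Toy illustration: an "agent" has no behaviour without an objective.
# Given one, it simply ascends it -- the objective, not any intrinsic
# drive, determines what it does. All names here are hypothetical.

def step(state, objective=None, lr=0.1, eps=1e-4):
    """One update: do nothing without an objective, else gradient-ascend it."""
    if objective is None:
        return state  # no objective -> no behaviour at all
    # finite-difference estimate of the gradient of the assigned objective
    grad = (objective(state + eps) - objective(state - eps)) / (2 * eps)
    return state + lr * grad

state = 0.0
for _ in range(100):
    state = step(state)  # never moves: no objective was given

for _ in range(100):
    # the designer's objective (peak at x = 3) now fully dictates behaviour
    state = step(state, objective=lambda x: -(x - 3.0) ** 2)
print(round(state, 2))  # converges toward 3.0
```

Any apparent "preference" here is an optimization dynamic: swap in a different objective and the same code pursues a different target with equal indifference.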
youtube · AI Governance · 2026-02-12T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxxizYGVM1XrXVgLml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6ezeCtxippjgp6Kp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzREX0Y5rOtyxgKJzV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxD0rFH8ErzbWznMop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtgWtF-7DJGgwZBOR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzyunNjUC6KdFKFNE14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw4vNw8EuXiRqfaobV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmXbp-Jpb59j79qF94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxyfO4hSP_Swa8qMOZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2CcpzOxqKEH28NRF4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
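Raw responses like the array above can be machine-checked before they reach the coding table. The sketch below is a minimal validator; the allowed label sets are inferred from the sample rows shown here, not from a published codebook, so treat the `SCHEMA` values as assumptions.

```python
import json

# Allowed labels per dimension, inferred from the sample rows above --
# an assumption, not an official codebook.
SCHEMA = {
    "responsibility": {"none", "unclear", "distributed", "company", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "approval", "outrage"},
}

def validate_coding(raw: str) -> list:
    """Parse a raw LLM response and reject rows with missing ids or unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id"):
            raise ValueError("row missing comment id")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(len(validate_coding(raw)))  # 1
```

Validating at ingestion keeps a single malformed or hallucinated label from silently propagating into the aggregated coding results.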