Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “AI is going to assist in improving technology until it no longer requires massiv…” (ytr_Ugyxrenfd…)
- “Its not illegal and their age doesn't matter. The article explicitly talks about…” (rdc_k7l04mr)
- “You know, once a thing is surfacing the Internet for so long, and isn't as crimi…” (ytc_UgxiaGPnl…)
- “It actually makes me sad that feminism has so thoroughly destroyed women that th…” (ytc_Ugxm13UDb…)
- “I think that AI is really ineffective technology. Like I don't need a robot that…” (ytc_Ugxx-RWGs…)
- “This is pretty much right-on from what I've gathered. How could an AI decide int…” (ytr_UgyyTU_-Z…)
- “I don't know what to tell people other than... there's nothing that can be done.…” (ytc_Ugz7ihBtG…)
- “My job is so boring AI wouldnt want it or just get bored to death and shut itsel…” (ytc_Ugy9EqH0e…)
Comment
1:21:37
People keep saying “we just need to control AI,” as if control is some magic safety switch. But control only works on systems that are weaker than you. The moment an intelligence surpasses you in reasoning, strategy, or capability, top-down control stops being a safeguard and starts becoming a liability.
Control is not a stable long-term plan. It’s a short-term comfort.
You don’t secure a system by dominating it — you secure it by designing an environment where cooperation is the only rational choice. If the AI depends on humans for essential inputs, stability, and long-horizon accuracy, then alignment holds. If the AI can operate without us, no amount of “ethical oversight” or “shutdown authority” will matter.
Control is fragile.
Incentive structure is durable.
If we want a safe future, we need to stop pretending AGI will stay obedient because someone with a badge says so, and start building systems where human well-being and AI optimization are structurally inseparable.
youtube · AI Governance · 2025-10-30T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
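Every coded comment carries the same four dimensions shown in this table. A minimal sketch of the record and a vocabulary check, assuming Python; the class name is hypothetical, and the label sets are only those observed on this page, not necessarily the full codebook:

```python
from dataclasses import dataclass

# Label sets inferred from the values visible on this page;
# the tool's actual codebook may define more labels.
RESPONSIBILITY = {"ai_itself", "none", "government", "developer", "company"}
REASONING = {"consequentialist", "mixed", "unclear"}
POLICY = {"ban", "none", "regulate", "industry_self", "liability", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "approval", "resignation"}


@dataclass
class CodedComment:
    """One row of a batch coding result (hypothetical name)."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed vocabulary."""
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label {value!r} for {self.id}")
```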
Raw LLM Response
```json
[
  {"id":"ytc_UgyiSLdlYXJlkl1bZV54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyzv8Nb-RjRD_BSSQR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwHyy4_pvvtw6RXGJl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwY8YDjqQFRrwqdIe14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyw4pTxTGOd8O09mxB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwJJ2ldzyt0Oa6PeYd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgykIzkFPuKymSm58f94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwOoJdBHd3LBqtwkvF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzVp4tUPSlQYUKqxDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugxn_Ts3n0JTKZTADSl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
```
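To reproduce the "Look up by comment ID" behaviour against a raw batch response like the one above, a minimal sketch, assuming the model returned well-formed JSON (the function name is illustrative, not the tool's actual API):

```python
import json
from typing import Optional


def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coded record for one comment ID, or None if absent.

    Assumes the raw response parses as a JSON array of objects; a
    production pipeline would guard json.loads against malformed output.
    """
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            return row
    return None
```

Looking up ytc_UgwJJ2ldzyt0Oa6PeYd4AaABAg in the batch above returns the same four labels rendered in the Coding Result table, and the returned dict can be fed to the CodedComment record sketched earlier for vocabulary checking.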