Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
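For readers reproducing this lookup outside the dashboard, here is a minimal sketch in Python, assuming the coded results are stored as a JSON array of records with an `id` field matching the comment IDs shown below; the file name `coded_batches.json` and the record shape are assumptions, not the tool's actual storage:

```python
import json

def lookup_coded_comment(comment_id: str, path: str = "coded_batches.json"):
    """Return the coded record for one comment ID, or None if it was never coded.

    Assumes `path` holds a JSON array of records shaped like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ..., ...}
    (an assumption; the dashboard's real storage is not documented here).
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    # Return the first record whose id matches, or None if absent.
    return next((r for r in records if r.get("id") == comment_id), None)
```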
Random samples — click to inspect
- "You just Quote the Gartner hype cycle and assume this is the same with some anec…" (ytc_Ugzpg6lZD…)
- "Fun Fact: Chat GPT uses enough electricity to power 80 homes for an entire year.…" (ytc_UgwemLTZV…)
- "What happens when you have two separate AI 's talking to each other? Ask one abo…" (ytc_UgwIA9s0p…)
- "As Francis of the Filthy tactfully puts it, "NOBODY CARES ABOUT YOUR ROBOT FANFI…" (ytc_Ughp8rMgX…)
- "AI if asked could solve the pollution question , by Not using traditional meth…" (ytc_UgzEDeHfQ…)
- "A.I is out of control. It needs to be shut down, regulations put in place, then …" (ytc_UgznsGhgw…)
- "William Gibson wrote Neuromancer in the 80s, which featured an AI hiring a group…" (rdc_o3hizzk)
- "Man: hey babe! / Woman: not even in your dreams / Robot: not even in your dreams🤖…" (ytc_UgzED6EQM…)
Comment
What’s striking about this entire discussion is not that anyone disagrees about the power of AI—it’s that they are all, in different ways, trying to smuggle in assumptions about control, responsibility, and inevitability without ever quite admitting how fragile those assumptions are.
You have Eric Schmidt calmly reassuring us that these systems are “just next-word prediction,” as though the scale and emergent behavior of those predictions don’t fundamentally alter what that means in practice. That’s a bit like saying a hurricane is just moving air molecules—technically true, but entirely beside the point. The issue is not the mechanism; it’s the consequences when that mechanism operates at planetary scale.
At the same time, you have voices like Nate Soares warning about superintelligence as if it’s an almost metaphysical inevitability—something that, once crossed, leads directly to extinction. That argument rests on a chain of assumptions so long and so fragile that it begins to resemble theology more than engineering. It is not impossible, but it is also not inevitable, and presenting it as such risks turning speculation into dogma.
Then there’s the far more grounded critique from Kate Crawford and Latanya Sweeney, which is, frankly, harder to dismiss: AI is not an abstract intelligence floating in a vacuum—it is a product, built inside corporations, trained on biased data, and deployed into a world with existing inequalities. That is not a hypothetical future risk; that is a present reality. When an AI system reflects bias, it is not “evil math”—it is a mirror of the systems that produced it, amplified by scale and speed.
And here lies the real tension that no one on that stage fully resolves: the mismatch between how fast the technology evolves and how slowly institutions adapt. As Sweeney points out, policy moves in years, while AI moves in months. That gap is not a minor inconvenience—it is the central problem. You cannot meaningfully govern a system that changes faster than your ability to understand it.
But perhaps the most revealing moment is the quiet contradiction running through the entire panel: everyone agrees these systems are incredibly powerful, yet many still insist they are fundamentally controllable. That confidence rests on the assumption that because humans built the system, humans remain in charge of it. History suggests otherwise. We routinely build systems—financial, technological, political—that exceed our ability to fully predict or manage their behavior.
The real danger, then, is not some cinematic “Terminator” scenario. It is something far more mundane and therefore far more likely: a gradual erosion of accountability, where decisions are increasingly delegated to systems that no single person fully understands, in service of incentives—profit, efficiency, competition—that no one has truly aligned with the public good.
In other words, the problem isn’t that AI might suddenly become too intelligent.
It’s that we might continue deploying it long before we’ve proven ourselves wise enough to handle it.
youtube · AI Governance · 2026-03-26T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
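These four dimensions suggest a validation step between the raw model output and this table. Below is a hedged sketch: the allowed label sets are only those observed in this batch's raw response (shown next), so treat `CODEBOOK` as an assumption; the real codebook may define additional labels.

```python
# Allowed values per dimension, as observed in this batch's raw response.
# Assumption: the actual codebook may permit more labels than these.
CODEBOOK = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if clean)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems
```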
Raw LLM Response
```json
[
{"id":"ytc_UgyY9ISvt41VWhB07sl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwmiuTE7f9CMltAYhh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzopRRhZ45NAVyQTCF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwMMRrux9L5OrnBLqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4yPrlGeN8KS4YT3l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLpRaatw062W6gknx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx6crD-k1uUs8ySInh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx77922bRRhBEvNv9t4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwY_mpjJnb33DXS3WV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLa5_YvVmKcWthrGR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
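Since the raw response is a plain JSON array keyed by comment ID, a short parser can build the per-ID index that the lookup and validator above assume. The fence-stripping fallback is an assumption about how other models might wrap their output; this particular batch returned bare JSON:

```python
import json

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Index a raw LLM batch response by comment ID."""
    text = raw.strip()
    # Strip a ```json fence if the model added one (assumption: this batch
    # returned bare JSON, but other models may fence their output).
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json")
    records = json.loads(text)
    # Keep only records that carry an id; index them for O(1) lookup.
    return {r["id"]: r for r in records if "id" in r}
```

Running `parse_raw_response` over the array above and then `validate_record` on each record would catch an off-codebook label before it reaches the Coding Result table.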