Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I want to support artists and they're hatred of AI art... but is this how Skynet… (ytc_UgzSi1xfn…)
- The same objections you have about a i art can be applied to traditional art as… (ytc_Ugy4Vg3fJ…)
- As with corporations, will the AI “inner corporation“ be considered a "legal per… (ytc_UgyJz2QcF…)
- Wow, I'm amazed how similar the AI voice was to yours! Thanks for the helpful ad… (ytc_Ugy2REzwv…)
- if this is so dangerous why do they keep improving the ai and not just shut it d… (ytc_Ugz9x0qZ6…)
- If AI decides to not delete humans, I hope AI robots can learn to wipe human but… (ytc_Ugw7iYonB…)
- Keep learning with us .Stay connected with our channel and team :) . Do subscrib… (ytr_UgwmImJ9w…)
- Bernie Sanders has always stood up for the working class. He is the only politic… (ytc_Ugy3E1rE9…)
Comment
Dr. Yampolskiy is absolutely right that ASI is an "unsolvable" problem — but there's a technical name for why: catastrophic Liveware-Software (L-S) interface failure at civilizational scale.
When humans (Liveware) can no longer comprehend or control AI systems (Software), the interface between them collapses. This isn't abstract theory — we've seen this pattern kill people in aviation, and the lessons are chilling.
Through the SHELL-Privacy™ framework (adapted from civil aviation human factors for AI and privacy), Dr. Yampolskiy's warning becomes even clearer:
Interface L-S (Liveware-Software) at civilizational scale:
- Liveware: Humanity (8 billion people with limited cognitive capacity)
- Software: Artificial Superintelligence (improving exponentially beyond human comprehension)
- Interface L-S: Impossible to manage (humans cannot predict, comprehend, or control ASI)
- Result: Existential risk
In aviation, we learned this lesson the hard way. Boeing 737 MAX (2018-2019): The MCAS system (automation) fought against pilots (humans) because the interface was poorly designed. Pilots couldn't comprehend what the system was doing. 346 people died. Air France 447 (2009): Autopilot disconnected unexpectedly, pilots were confused by conflicting information, couldn't understand the system's state. 228 people died.
Both were classic L-S interface failures: systems exceeded human ability to comprehend and control them. But here's the terrifying difference: when L-S interface fails in aviation, hundreds die. When L-S interface fails with ASI, 8 billion could die.
Dr. Yampolskiy asks why the smartest people are ignoring this clear existential risk. MEDA-Privacy™ (our framework for blameless investigation of incidents) offers an answer: It's not just "greed" or "stupidity" — it's systemic interface failures forcing reckless decisions.
Interface L-E (Liveware-Environment): The competitive environment (AI arms race) forces reckless decisions. "If we don't do it, China/Russia will." Developers aren't evil — they're operating in an environment that makes dangerous choices seem rational or inevitable.
Interface L-L (Liveware-Liveware): Lack of coordination between AI Safety researchers and AGI developers. "Move fast and break things" culture in tech. Risk normalization ("people always said AI was dangerous, but nothing happened").
Interface L-S (Liveware-Software): Developers can't predict the behavior of systems they create. "Black box" AI models. Impossible to test ASI before creating it — you can't run a safety drill for the end of the world.
In aviation, we learned you can't just build faster, more powerful systems and hope pilots adapt. You must design interfaces that match human capabilities and limitations. But with ASI, we're doing exactly what killed 346 people in 737 MAX crashes: building systems that exceed human ability to comprehend, then hoping we'll figure out control later. We won't.
The difference is that in aviation, after crashes, we can redesign interfaces, retrain pilots, update regulations. With ASI, we don't get a second chance. There's no "737 MAX grounding" for superintelligence. Once it's created and the L-S interface fails, it's over.
Dr. Yampolskiy mentions that even the creators admit they don't know how to make it safe, yet they're proceeding anyway. This is the most damning evidence of systemic interface failure: when the environment (L-E) creates such intense competitive pressure that even people who understand the risk proceed anyway, the system itself is broken.
What can we do? Honestly, I don't know if there's a solution, but SHELL-Privacy™ and MEDA-Privacy™ at least offer a framework for diagnosing WHY this is happening:
We're not just building dangerous AI — we're building it within interfaces (competitive, organizational, technical) that make it impossible to stop, even when we know it's dangerous.
If we can't redesign those interfaces (global coordination to stop AI arms race, organizational cultures that prioritize safety over speed, technical systems that remain comprehensible to humans), then Dr. Yampolskiy is right: we're gambling with 8 billion lives, and the odds are not in our favor.
For those interested in this human factors approach to AI Safety, I wrote about SHELL-Privacy™ and MEDA-Privacy™ in "Applying SHELL-Privacy™ and MEDA-Privacy™ Frameworks across all sectors" (available in EN, PT, ES, FR). Chapter 5 specifically addresses how to identify and redesign dangerous interfaces before they cause catastrophic failures.
Thank you, Dr. Yampolskiy, for this essential warning. And thank you, Steven, for giving him this platform. This is the conversation we need to be having — not just "is AI dangerous?" but "why are we building something we know we can't control?"
youtube · AI Governance · 2025-12-14T02:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
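
The table above shows one coded record. As a minimal sketch of how such a record could be modeled programmatically, the Python below captures the four dimensions with the value sets visible in the raw response further down this page; the class and constant names are hypothetical, and the actual codebook may allow values not seen here.

```python
# Hypothetical model of one coding result. The allowed value sets are only
# those observed in the sample output on this page and may not be exhaustive.
from dataclasses import dataclass
from datetime import datetime

RESPONSIBILITY = {"developer", "company", "government", "ai_itself", "distributed"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"none", "regulate", "ban", "liability"}
EMOTION = {"fear", "outrage", "mixed"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise ValueError if any dimension falls outside the observed value sets."""
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"unexpected {field_name}: {value!r}")


# Example mirroring the table above; the comment ID is a placeholder,
# since the page does not show which ID the displayed comment has.
result = CodingResult(
    comment_id="ytc_PLACEHOLDER",
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
result.validate()
```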
Raw LLM Response
[
{"id":"ytc_Ugxa5ByEDwNrftNdBeN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxQLNRCoFVcAmcwETR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxCWA83kPrOA5W5IH54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzzKsXKnsMiK7UIN0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYlJEGQU9-dmWO-Od4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz8NR4lH3GDG3kBMJ54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxM5_wAiiecwXFLxQJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxA8DqJ0Z2yDIFK56x4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwta7Msvkq4AFYkXdt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzjNcgSLoAyAFC87XN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
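
Because the raw response above is a plain JSON array, a small parser is enough to turn it into the ID-indexed lookup that the "Look up by comment ID" control implies. The sketch below is a minimal, hypothetical version: it assumes the model always returns well-formed JSON (real LLM output can include markdown fences or truncation that would need cleaning first), and the function names are not from any actual tool.

```python
from __future__ import annotations

import json


def parse_raw_response(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array like the one above) into an id -> record map."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}


def lookup(coded: dict[str, dict], comment_id: str) -> dict | None:
    """Return the coded dimensions for one comment ID, or None if it was not coded."""
    return coded.get(comment_id)


# Example with the first record of the response above.
raw = """[
  {"id": "ytc_Ugxa5ByEDwNrftNdBeN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""
coded = parse_raw_response(raw)
print(lookup(coded, "ytc_Ugxa5ByEDwNrftNdBeN4AaABAg"))
```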