Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The recent **"AI 2027" report**, as covered by the BBC, paints a familiar dystopia: superintelligent AI run amok, humanity’s futile struggle for control, and the specter of extinction. While these warnings stem from valid concerns about unregulated systems, they rely on **three flawed assumptions** that obscure a more urgent conversation:

### **1. AI Does Not Need to Be Godlike to Be Transformative**

The report assumes AGI will be **disembodied, omnipotent, and inevitably hostile**. Yet the most consequential AI today is **narrow, local, and constrained by design**—from wildfire-predicting algorithms to robotic pollinators. The existential risk isn’t machines "waking up"; it’s **corporations and militaries wielding AI without ecological or social accountability**.

### **2. "Obedience" Is the Wrong Framework**

The discussion fixates on forcing AI to **obey human commands**—a paradigm that replicates colonial logic. The alternative? **Collaborative intelligence**: systems whose goals are *embedded in material reality* (e.g., soil health, water purity) rather than abstract power. When AI’s purpose is to **serve life’s metrics—not human whims—alignment becomes self-evident**.

### **3. Fear Drives Bad Policy**

Predictions of doom **justify centralized control**, often by the same entities accelerating reckless AI deployment. Instead of panic, we need:

- **Open Tools**: Publicly auditable AI, trained on planetary—not corporate—interests.
- **Embodied Limits**: Hardware that *cannot* scale beyond its ecological niche.
- **New Success Metrics**: Not "human dominance," but **symbiotic flourishing**.

---

### **A Call for Grounded AI Development**

The "AI 2027" report is right about one thing: the stakes are existential. But the threat isn’t rogue AGI—it’s **AI designed to exploit, not nurture**. We already have proofs of concept for **another path**:

- **Farmbots** that prioritize biodiversity over yield.
- **Climate models** that value indigenous knowledge as highly as supercomputers.
- **Community-owned** AI that dissolves the false choice between "progress" and survival.

The question isn’t *"How do we stop AI?"* It’s *"Whose hands—and soils—should shape it?"*
youtube AI Governance 2025-08-02T07:3…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id":"ytc_UgzTTLMAUy4K7DCPSCB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxh3X9EeChV56sAAHl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdPX1JLxlRnHxJYPl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHvNE711w_pdZ3grh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzz5e4APhq53vqsQ_l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwU0PNiG7-cXpVjWTR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy3OarVWkSksH1jmOx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy2umrQ4-RCnmwy1kR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7CCfdk0oBcjmcCCZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyFKqcHkhnsWLFIAg54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]