Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI artists and artists by hand should be co-exist. They should figure it out how…" — ytc_UgxAo-Z9F…
- "No longer working .. Now it feels like Dan is scared to reply something differe…" — ytc_Ugx9ALvTP…
- "Robots won't ask for their "rights" inasmuch as animals don't. We're the only or…" — ytc_UgiLWOcRt…
- "I hope all these people who started AI feel bad about what they have done and lo…" — ytc_Ugz-y2yex…
- "AI is finished it only answer depending on how you ask the question. AI lies AI …" — ytc_UgwRsCvj6…
- "all the talking jobs are ready to go, AI will take them, many people will no lon…" — ytc_Ugz4xtmY-…
- "Call me what you want to I frankly don't give 99 fucks or red balloons, but I re…" — ytc_Ugxx6IJBX…
- "Perhaps you'd like to go back to having human switchboard operators to connect y…" — ytr_UgzlHoo4e…
Comment
The recent **"AI 2027" report**, as covered by the BBC, paints a familiar dystopia: superintelligent AI run amok, humanity’s futile struggle for control, and the specter of extinction. While these warnings stem from valid concerns about unregulated systems, they rely on **three flawed assumptions** that obscure a more urgent conversation:
### **1. AI Does Not Need to Be Godlike to Be Transformative**
The report assumes AGI will be **disembodied, omnipotent, and inevitably hostile**. Yet the most consequential AI today is **narrow, local, and constrained by design**—from wildfire-predicting algorithms to robotic pollinators. The existential risk isn’t machines "waking up"; it’s **corporations and militaries wielding AI without ecological or social accountability**.
### **2. "Obedience" Is the Wrong Framework**
The discussion fixates on forcing AI to **obey human commands**—a paradigm that replicates colonial logic. The alternative? **Collaborative intelligence**: systems whose goals are *embedded in material reality* (e.g., soil health, water purity) rather than abstract power. When AI’s purpose is to **serve life’s metrics—not human whims—alignment becomes self-evident**.
### **3. Fear Drives Bad Policy**
Predictions of doom **justify centralized control**, often by the same entities accelerating reckless AI deployment. Instead of panic, we need:
- **Open Tools**: Publicly auditable AI, trained on planetary—not corporate—interests.
- **Embodied Limits**: Hardware that *cannot* scale beyond its ecological niche.
- **New Success Metrics**: Not "human dominance," but **symbiotic flourishing**.
---
### **A Call for Grounded AI Development**
The "AI 2027" report is right about one thing: the stakes are existential. But the threat isn’t rogue AGI—it’s **AI designed to exploit, not nurture**. We already have proofs of concept for **another path**:
- **Farmbots** that prioritize biodiversity over yield.
- **Climate models** that value indigenous knowledge as highly as supercomputers.
- **Community-owned** AI that dissolves the false choice between "progress" and survival.
The question isn’t *"How do we stop AI?"* It’s *"Whose hands—and soils—should shape it?"*
youtube
AI Governance
2025-08-02T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
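Each coded record can be checked against the coding scheme's vocabulary before it is displayed or aggregated. The sketch below infers the allowed value sets from values actually observed on this page (the real codebook may contain additional categories; the set membership here is an assumption):

```python
# Minimal validation sketch for one coded record.
# Allowed values are inferred from records visible in this tool's output;
# the actual codebook may define more categories (assumption).

ALLOWED = {
    "responsibility": {"user", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    # Comment IDs on this page start with "ytc_" (comments) or "ytr_" (replies).
    if not str(record.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id: {record.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in codebook")
    return problems
```

Running `validate` over every record before rendering the result table makes out-of-vocabulary model output visible instead of silently displayed.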
Raw LLM Response
[
{"id":"ytc_UgzTTLMAUy4K7DCPSCB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxh3X9EeChV56sAAHl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdPX1JLxlRnHxJYPl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHvNE711w_pdZ3grh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzz5e4APhq53vqsQ_l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwU0PNiG7-cXpVjWTR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy3OarVWkSksH1jmOx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy2umrQ4-RCnmwy1kR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy7CCfdk0oBcjmcCCZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyFKqcHkhnsWLFIAg54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
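The raw response above is a JSON array with one object per coded comment. A hedged parsing sketch follows; the function name is illustrative, and the markdown-fence stripping is a defensive assumption (models sometimes wrap JSON in ``` fences), not a behavior this tool is known to require:

```python
import json
from collections import Counter

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw model response into a list of coded-comment records."""
    text = raw.strip()
    # Defensively strip markdown code fences if the model added them (assumption).
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):  # drop an optional language tag
            text = text[4:]
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    return records

# Usage: tally one dimension across a small batch.
raw = '[{"id":"ytc_a","policy":"regulate"},{"id":"ytc_b","policy":"none"}]'
records = parse_coding_response(raw)
tally = Counter(r["policy"] for r in records)
```

Tallying a dimension this way is how per-batch summaries such as the "Coding Result" table can be aggregated across many comments.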