Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @zoravar.k7904 the reason I had for using a server that way is because if every … (`ytr_UghSFLJx8…`)
- Your flaw is that the assets this wealthy oligarchy owns are stocks and real est… (`ytc_Ugz3UpN5g…`)
- You have to ask why, while they instill all this fear about AI, they are the one… (`ytc_UgwElWTxL…`)
- AI can tell you about its own risk, just ask it. He doesn’t have to talk about i… (`ytr_UgwxRygbZ…`)
- And then - AI learns magic and starts twisting the fabric of reality in 12 dimen… (`ytc_UgwSqKZIe…`)
- It's pretty ridiculous. I used Google lens also. It was a pic of the end of a ca… (`ytr_UgwhelrLU…`)
- Samsung has forced integration of ai on my phone. I disabled it in every way pos… (`ytc_UgxQ3zP6P…`)
- Pasco county res. Heard cursary mention of predictive policing before but had no… (`ytc_UgxIbL6N6…`)
Comment
## What I Think About This
I’ll be direct about where I see strength and weakness in his argument:
### Where he’s right
**The capabilities vs. safety gap is real.** Having worked in AI academically, I can confirm that progress in alignment is fragmented and patchwork while capabilities advance at a brutal pace. This isn’t alarmism; it’s a technical observation.
**The impossibility of “turning it off”** is an underestimated point. A distributed system with backups, running across multiple jurisdictions, has no red button. Anyone who says “just turn it off” doesn’t understand systems architecture.
**Misaligned incentives.** Companies have fiduciary duty to investors, not to humanity. This is structural, not individual malice.
### Where he exaggerates or oversimplifies
**Overly precise timelines.** AGI by 2027 assumes a specific definition and ignores non-computational bottlenecks. 2-3 year predictions for civilizational transformations have a poor track record.
**Absolute impossibility of control.** He treats as a proven theorem what is actually a strong conjecture. Complex systems often have emergent properties that allow for indirect control.
**Simulation with “near certainty.”** The argument is logically interesting, but “near certainty” is epistemically arrogant. It’s a non-falsifiable hypothesis.
### My read
He’s useful as a counterweight to Silicon Valley’s irresponsible optimism. But the absolute apocalyptic tone can also be counterproductive — if “there’s nothing to be done,” why try?
The practical point for you: risk diversification, not paralysis.
youtube · AI Governance · 2025-11-30T23:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwr_iPRH1omQfPK8Sd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7x-G_iAwLkAvRPH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwdDZwCwu53WArgbAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyRyFeC5L_sGS8NJ5J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxbvRdWNWDTwLo8hGh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmpaPB0vb1z09Cd294AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzFeboKLKOqPl94-xt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyHgYOab37h24WavUd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwATkudRPsdIcJgROp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxO7nraVNSEfXTIMx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
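The raw response is only usable if every row carries the four coding dimensions with values from the codebook. A minimal validation sketch, assuming value sets inferred from the examples visible on this page (the real codebook may allow more categories):

```python
import json

# Allowed values per coding dimension, inferred from the examples above;
# the full codebook is an assumption and may contain additional categories.
SCHEME = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only rows that have an
    id and in-scheme values for every dimension."""
    valid = []
    for row in json.loads(raw):
        if "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEME.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":'
       '"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_rows(raw)))  # 1
```

Rejecting instead of repairing out-of-scheme rows keeps the downstream counts honest: a model that invents a new category shows up as a drop in coverage rather than a silent corruption of the tallies.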