Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
## What I Think About This

I’ll be direct about where I see strength and weakness in his argument:

### Where he’s right

**The capabilities vs. safety gap is real.** I worked in AI academically, and I can confirm that progress in alignment is fragmented and full of patches while capabilities advance at a brutal pace. This isn’t alarmism, it’s technical observation.

**The impossibility of “turning it off”** is an underestimated point. Distributed systems, with backups, running across multiple jurisdictions: there is no red button. Anyone who says “just turn it off” doesn’t understand systems architecture.

**Misaligned incentives.** Companies have a fiduciary duty to investors, not to humanity. This is structural, not individual malice.

### Where he exaggerates or oversimplifies

**Overly precise timelines.** AGI by 2027 assumes a specific definition and ignores non-computational bottlenecks. Two-to-three-year predictions for civilizational transformations have a poor track record.

**Absolute impossibility of control.** He treats as a proven theorem what is actually a strong conjecture. Complex systems often have emergent properties that allow for indirect control.

**Simulation with “near certainty.”** The argument is logically interesting, but “near certainty” is epistemically arrogant. It’s a non-falsifiable hypothesis.

### My read

He’s useful as a counterweight to Silicon Valley’s irresponsible optimism. But the absolute apocalyptic tone can also be counterproductive: if “there’s nothing to be done,” why try? The practical point for you: risk diversification, not paralysis.
Source: YouTube · AI Governance · 2025-11-30T23:0… · 1 like
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
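
For working with a single coding result programmatically, a minimal sketch is shown below, assuming the four dimensions plus the timestamp make up the full record; the `CodingResult` class and its field names are hypothetical, not the tool’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment across the dimensions shown above (hypothetical schema)."""
    responsibility: str  # who is held responsible, e.g. "developer", "company", "government"
    reasoning: str       # moral reasoning style, e.g. "consequentialist", "deontological"
    policy: str          # policy stance, e.g. "regulate", "liability", "none"
    emotion: str         # dominant emotion, e.g. "fear", "outrage", "resignation"
    coded_at: datetime   # when the code was assigned

# The result shown in the table above, as a record.
result = CodingResult(
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```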
Raw LLM Response
[ {"id":"ytc_Ugwr_iPRH1omQfPK8Sd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy7x-G_iAwLkAvRPH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwdDZwCwu53WArgbAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyRyFeC5L_sGS8NJ5J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxbvRdWNWDTwLo8hGh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzmpaPB0vb1z09Cd294AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzFeboKLKOqPl94-xt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyHgYOab37h24WavUd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwATkudRPsdIcJgROp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwxO7nraVNSEfXTIMx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"} ]