Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Let's not forget @ 7:19 (should have been the whole segment) Anderson mentioning, that this is the same CEO who's AI in multiple safety sims over months has lied, deceived, used blackmail and in a independent test, threatened developers with harm. The simple fact that Dario tries to explain this away with tech safety jargon should be your first clue that he is for profit. I encourage you to look at the results from the independent study not just his "extreme" testing.

Because this argument can be weaponized to justify a reckless sprint. It’s like saying, “If I don’t build this nuke, the other guy will, so I might as well.” It sidesteps the question of whether the tech should be built at all or at what pace. It’s a moral cop-out if it’s used to dismiss legitimate concerns about job displacement, bias in AI systems, or existential risks. The “everyone’s doing it” logic doesn’t absolve you of responsibility for your own choices—it just shifts the blame. If Anthropic or OpenAI truly believe unchecked AI could tank economies or worse, saying “China tho” feels like a dodge to keep the venture capital flowing and avoid tough ethical calls.

Now, let’s be real: stopping entirely isn’t practical. AI’s not a single invention you can unmake—it’s a sprawling field with millions of researchers, startups, and open-source projects worldwide. Even if U.S. labs froze, progress would continue elsewhere, not just in China but in Europe, India, or even rogue basement coders. The genie’s half out of the bottle. But that doesn’t mean the race has to be a free-for-all. There’s a middle ground—coordinated regulation, international agreements, or at least some self-imposed restraint—that could slow things down enough to address risks like mass unemployment or AI misuse without ceding the field entirely.

The childish part you mentioned? Spot-on. It’s like two kids saying, “He started it!” instead of owning their actions. Execs like Dario Amodei aren’t wrong that global competition exists, but leaning on it as the sole reason to charge ahead ignores their agency. They could push for safer AI, advocate for global standards, or prioritize applications that don’t screw over workers. Instead, the “China boogeyman” line often feels like a way to rally public support, scare regulators into backing off, and keep the hype train rolling.
youtube AI Jobs 2025-06-01T12:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        virtue
Policy           liability
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugwj_XJc049ZQQNpH9Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzg6EVNNbVm96o7b5t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzfSZqjMHuu-Z6P6_R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyI2bL74Vtw1blebO14AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwqzu9DA1x8_44qI2V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"} ]