Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While the video makes some fair historical comparisons, it fundamentally misunderstands the exponential trajectory we are on. Comparing AI to electricity or the early internet is a false equivalence. Those required massive, physical infrastructure rollouts over decades. AI is software-based, cloud-delivered, and its barrier to adoption is basically zero. Just look at the difference between AI capabilities 3 years ago (in 2023) versus now. In 2023, we were amazed that a chatbot could write a decent email or a quirky poem, even though it hallucinated heavily and struggled with basic logic. Fast forward to today, and we have multimodal agents generating high-fidelity video, writing and debugging production-level code, and executing multi-step autonomous workflows. If we went from "novelty text generator" to "autonomous digital worker" in just 36 months, assuming a slow, decades-long integration process is completely blind to the reality of exponential growth.

A few other massive holes in this argument:

1. The "100% Reliability" Myth: The video leans heavily on AI making mistakes (like the Air Canada chatbot). But businesses don't need AI to be 100% perfect to replace jobs; they just need it to be cost-efficient. If an AI can do 80% of a customer service rep's job at 1% of the cost, a company will gladly lay off 80% of the department and keep a few humans to handle the complex edge cases. Humans make mistakes too.

2. The Fallacy of Infinite Demand: The strategist from MetLife argued that making a worker faster just gives them "more questions to answer." This assumes a company has infinite demand, infinite budgets, and an endless appetite for growth. In the real world, companies have fixed quotas. If AI makes one programmer or marketer five times more productive, the company is far more likely to fire four people and keep the one, rather than keeping all five just to produce 5x the output that the market can't even absorb.

3. Dismissing Alignment is Dangerous: Narayanan accurately points out that aligning AI to human values is a "pipe dream" because humans can't even agree on morals. But he draws the completely wrong conclusion! The fact that alignment is incredibly difficult is exactly why highly capable, autonomous AI is so dangerous. Building systems that are smarter than us without knowing how to build a reliable steering wheel isn't a reason to relax; it's a reason to take existential risk seriously.

AI doesn't need to perfectly "predict the future" to be massively disruptive. It just needs to be cheaper and faster at probabilistic reasoning than the average knowledge worker. And looking at the last three years, that future is arriving much faster than this video wants to admit.
youtube AI Jobs 2026-03-29T13:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
 {"id":"ytc_UgxNLWaJYuHSNof0DIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxmzbcqAJFFr-Tn7Bh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugz3Bz2u413a6x_SjKN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugy5gsO3S_fKCDWNB7d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugxzrn02RiTgB9GfD2x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwZkX1PTp3IVe8EC114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwGedIjhmM7kAIoYeF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugw3LoZoVwGJ91aHz2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyIGfQ_B_CWdbMamdB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzSBHnPSyabW8pbc-l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"}
]

Note: as originally captured, the model closed the array with ")" instead of "]", which makes the output invalid JSON; the bracket is corrected above.
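A raw response like the one above can fail strict JSON parsing when the model closes the array with ")" instead of "]", which is one plausible reason a coding result would fall back to "unclear" on every dimension. The following is a minimal sketch of a tolerant parser for this shape of output; the dimension names come from the coding table above, while the `parse_coding_response` function and the bracket-repair heuristic are illustrative assumptions, not part of any tool shown here.

```python
import json

# Dimensions expected in each coded record (taken from the result table above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding_response(raw: str) -> list[dict]:
    """Parse the model's JSON array, repairing a mismatched closing bracket.

    Hypothetical helper: if strict parsing fails and the text looks like an
    array closed with ")" instead of "]", retry with the bracket fixed.
    """
    raw = raw.strip()
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        if raw.startswith("[") and raw.endswith(")"):
            records = json.loads(raw[:-1] + "]")
        else:
            raise
    # Normalize: keep the id plus the four coding dimensions,
    # defaulting any missing dimension to "unclear".
    return [
        {"id": r.get("id", ""), **{d: r.get(d, "unclear") for d in DIMENSIONS}}
        for r in records
    ]


# Example with the same failure mode as the raw response above
# (array terminated by ")" rather than "]"):
raw = '[{"id":"ytc_example","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"})'
rows = parse_coding_response(raw)
print(rows[0]["emotion"])  # fear
```

A stricter pipeline might instead reject the response and re-prompt the model, but a small repair step like this keeps one stray character from discarding an otherwise well-formed batch of codes.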