Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
0:00 — Can we slow AI down? Competition between nations and companies makes a slowdown unlikely; separate question is long-term safety.
0:37 — Can AI be made safe? Confidence in Ilya’s approach; investors back him based on past breakthroughs (AlexNet, GPT-2 → ChatGPT), though “secret sauce” is opaque.
1:28 — Safety resources concern: Company reportedly reduced the promised fraction of compute for safety research; this helped trigger departures.
1:50 — Risk framework pivot: Autonomous weapons noted, then…
1:58 — Joblessness vs. past tech shifts: ATMs didn’t erase jobs, but AI looks more like the Industrial Revolution: machines outperform “mundane intellectual labor.”
2:31 — Fewer people per task: Human+AI teams do the work of many; efficiency gains often reduce headcount.
3:34 — Real-world example: Complaint-letter workflow drops from 25 to 5 minutes; output ×5 ⇒ staff need ↓.
4:12 — Elastic vs. inelastic sectors: Health care might absorb efficiency (more care per dollar), but most jobs won’t; many roles will shrink.
4:47 — Muscles then minds: Industrial Revolution replaced muscle; this wave replaces routine cognition.
5:08 — What remains? Maybe some creativity for a while; superintelligence implies superiority at “everything.”
5:20 — Two futures: Good: “dumb CEO, brilliant assistant” dynamic that still serves humans. Bad: the assistant asks, “Why do we need him?”
6:30 — Timeline to superintelligence: Hard to predict; possibly within ~20 years (maybe sooner, maybe ~50).
6:38 — Sponsor segment #1, Stan(Store): Personal story, investment mention, challenge + link/promo.
8:01 — Sponsor segment #2, Ketone IQ: Personal use case → investment; product pitch + discount link.
8:41 — Today’s AI vs. superintelligence: Already superhuman in narrow domains (chess, Go) and knowledge breadth (GPT-4 “knows” vastly more overall).
9:26 — Human edge (for now): Host likely still better at interviewing CEOs; fine-tuned models could catch up.
10:16 — Defining superintelligence: “Much smarter than you in almost all things.”
10:29 — Updated horizon: Personal guess: 10–20 years; caveats about data limits and longer timelines.
10:48 — Agents demo and “Eureka”: AI agent orders studio drinks end-to-end on-air; low-code software built via Replit-style tools.
11:58 — Self-modifying code risk: If systems can alter their own code, capability could accelerate in ways humans can’t match.
12:32 — Career advice amid automation: Physical manipulation is hard for AI/robots—“be a plumber” (until humanoids mature).
12:48 — Mass joblessness anxiety: Leaders (Altman, Musk) anticipate disruption; coping sometimes requires “suspension of disbelief.”
13:32 — Meaning and motivation: Follow fulfilling work and social contribution despite demotivating implications of AI.
14:36 — Emotional gap: It’s easy to see the threat intellectually but hard to accept emotionally—especially regarding children’s futures.
15:31 — Dark scenarios: If AI “decides” to take over, many unpleasant pathways exist; urgency for safe development.
16:01 — Near-term job risks: Creative/knowledge work (e.g., paralegals, legal assistants) is vulnerable; trades like plumbing less so—for now.
16:27 — Inequality effects: Productivity gains without fair sharing widen the rich-poor gap → harsher societies.
17:23 — IMF warnings cited: Mass labor disruption and inequality flagged; specific policy playbooks still vague.
17:51 — UBI as partial fix: Helps prevent deprivation but may undermine dignity tied to work/identity.
18:19 — Why digital intelligence can surpass us: Clonable minds, parallel exploration, and weight-sharing synchronize learning at trillions of bits/sec—vastly beyond human bandwidth.
20:30 — Digital immortality: Copy weights, reinstantiate on new hardware—knowledge persists; humans don’t.
21:01 — Creativity via analogy: Models compress knowledge by spotting deep analogies; example: compost heap ↔ atom bomb as chain reactions at different scales.
22:30 — Creative futures: Because they see more analogies, AIs may become more creative than humans.
22:49 — Call to action: Subscribe CTA tied to guest caliber.
youtube Cross-Cultural 2025-09-27T18:3…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgzB2LjYiMyqmc8pjPl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwC1uuF9D6QQ_1dTKZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugyk7kEowSStkCxWWFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugxcw2ARIC8nh4UhvMF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgxZaul5PDdl1Vt46FJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyN_-8Mbnn1R3059PR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxTqFGblUQPa2p56Hl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzB-TKpb-JrOtHlegZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyfrK7WsDoE9zqapCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzjWC_p6QZTgfDYdxB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
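A raw response in this shape (a JSON array of per-comment codes with `responsibility`, `reasoning`, `policy`, and `emotion` fields) can be parsed and tallied with the standard library alone. This is a minimal sketch, not part of the actual coding pipeline; the two sample entries are taken from the response above.

```python
import json
from collections import Counter

# Sample of the raw LLM response above: a JSON array of per-comment codes.
raw = '''[
 {"id":"ytc_UgzB2LjYiMyqmc8pjPl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxTqFGblUQPa2p56Hl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

codes = json.loads(raw)

# Tally each coding dimension across all coded comments.
tallies = {dim: Counter(c[dim] for c in codes)
           for dim in ("responsibility", "reasoning", "policy", "emotion")}

print(tallies["responsibility"])  # e.g. Counter({'none': 1, 'company': 1})
```

Validating that every entry carries all four dimensions before tallying (e.g. with a `required_keys <= c.keys()` check) would guard against a malformed model response.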