Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great episode, genuinely one of the better platforms unpacking the real infrastructure and second-order impacts of an AGI era. Really appreciate the quality of the discussion. One point that’s still largely missing: almost no one is treating growth rate itself as the primary controllable risk vector. In every other high-consequence industry I work in as a principal designer/AI consultant, growth rate is the lever. Even if individual systems get safer, accelerating deployment mathematically increases aggregate risk unless the growth rate is controlled. Nuclear reactors are throttled during uncertainty. AI, by contrast, is accelerated during uncertainty. There is a practical, engineering-led way to manage this using quantified risk budgets (1-in-a-million style targets), staged capability gating, and controlled ramp-up, models already proven in nuclear and aviation. Whoever among the frontier AI labs genuinely adopts this first doesn’t just solve a safety problem; they win the leadership position globally. You heard it here first! :) If useful, I’m happy to share a short, high-level position paper outlining the approach. Thanks again; these conversations really matter.
youtube AI Governance 2025-12-19T15:2… ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyWXIolfHVH8DlJGAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzvXv6kT29yjbwQNyZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzSR5Sn8v96bLJ9hyZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgycBgNgosKDtxVq7rx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwIO4V10hvtm0DbRed4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgySBxXJ1TCquAZixpp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwtf-V1cawnd3ME2Dt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw-gd1vY9SHit3A29l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwoh_S9oM1yo_kqs0Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy7CmLokiOqFI_A4qp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
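The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a batch could be parsed and indexed by comment id is shown below. The function name `index_codings` is hypothetical, and the allowed label sets are inferred only from the values visible in this batch; the full codebook may define additional labels.

```python
import json

# Two entries excerpted from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyWXIolfHVH8DlJGAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzvXv6kT29yjbwQNyZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Label vocabularies inferred from this batch (assumption: the real
# codebook may contain more values per dimension).
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def index_codings(raw_json: str) -> dict:
    """Parse a batch response and index codings by comment id,
    skipping any entry with an out-of-vocabulary label."""
    out = {}
    for entry in json.loads(raw_json):
        if all(entry.get(dim) in values for dim, values in ALLOWED.items()):
            out[entry["id"]] = entry
    return out

codings = index_codings(raw)
print(codings["ytc_UgyWXIolfHVH8DlJGAJ4AaABAg"]["emotion"])  # → approval
```

Indexing by id makes it straightforward to join the model's coding back onto the original comment record for display, as this page does.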