Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Unfortunately, not only China. Japan, USA, German, you name it. Robot is killing…" (ytr_Ugw1kWdvh…)
- "She is great. Im getting the book now. Why cant more people see whats the most o…" (ytc_UgxkAVWg_…)
- "i miss when ai genersting images were just used to create memes with dall e…" (ytc_UgxBFfhS6…)
- "last 10s of interview is the real issue society will face with ai expansion - ha…" (ytc_UgwTmFbqj…)
- "What are all these analogies given by the Believer AI? None of them made any sen…" (ytc_UgwLY5w-D…)
- "Genuine question, but you mentioned how these were run through Nightshade in ord…" (ytc_UgytbyjQe…)
- "I struggle with hair a lot so that's really the only thing I might use AI for, I…" (ytc_UgzO1kC2Q…)
- "I saw a job posting the other day on LinkedIn and the job was posted 8 hours ago…" (ytc_UgzxLoG_u…)
Comment
Steven, while I appreciate you platforming Stuart Russell and his warnings about AGI existential risks, there's a glaring irony here.
You're both "winners" in the current system — you with your multi-million-pound podcast empire built on viral clips and hustle-culture branding, him with prestige, grants, and influence inside the very academic-tech complex racing toward superintelligence.
The people who profit most from lecturing the masses about impending doom are often the ones with the strongest incentives to keep the status quo intact. Real solutions (pausing the capability race, democratising development, fundamentally rethinking profit-driven AI) would threaten your relevance, your speaking fees, your views, and the entire attention economy you both thrive in.
Talking about human extinction while monetising the fear of it feels less like leadership and more like dinosaurs giving TED talks about the asteroid — alarming, eloquent, but ultimately invested in a world that’s already ending.
If we're serious about survival, maybe the first roles AI should replace aren't truck drivers... but professional doom narrators who get rich warning us without risking their place in the game.
youtube · AI Governance · 2025-12-24T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugw0l_heyAXQ4x4FK-h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzzdAWLM_iNJQNz6Qd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyH_dg1fGqLFDFVQTl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwyctSnsREPJuVM0N94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzeI9Ir6B8I91SBAc54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugymf4vN6oRrYuoUotx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyCM-AiwL-MnJNyXoR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzS8rM0MT_xe2t1KJV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwiJrC4XFYYkZopzmB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyPzcOARjovWMmxcox4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
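A batch response like the one above is only usable if every row carries a known label on each of the four coded dimensions. The sketch below shows one way to parse and validate such a response; the allowed values per dimension are an assumption inferred from the sample output and the Coding Result table, not the full codebook, which may define additional categories.

```python
import json

# Allowed labels per dimension, inferred from the sample batch above
# (assumption: the real codebook may contain more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in the samples start with ytc_ (comments) or ytr_ (replies).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows

# One row from the batch above, as it would arrive from the model.
raw = ('[{"id":"ytc_UgyH_dg1fGqLFDFVQTl4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]')
coded = validate_batch(raw)
print(coded[0]["policy"])  # regulate
```

Validating at ingest time means a hallucinated label (e.g. a policy value the codebook never defined) surfaces immediately instead of silently skewing downstream counts.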