Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or browse the random samples below.

Random samples:
- "Yeah but come on now, everyone knows that this is not real photage from Hungary.…" (ytc_UgwadV1z2…)
- "AI makes soo many mistakes!! AI war drones have killed multiple people by accid…" (ytc_UgxlbqcD0…)
- "I'm at a stage with AI where I'm thinking it's got a higher floor than 'internet…" (rdc_mtne499)
- "So this facial recognition software only work mostly black people who is surpris…" (ytc_UgySec8Kh…)
- "There should be a new department of AI and Secretary of AI to start with.…" (ytc_UgyyQ40jS…)
- "bro just said i am alive 💀bot he is a robot he was never alive…" (ytc_UgwbylVp1…)
- "Can someone say on loud whats forcing the companies to get rid of employees in f…" (ytc_UgzCfHM1y…)
- "Mankind is famous for not acting until it’s on the brink of something it knew wa…" (ytc_UgxIa7rME…)
Selected comment
Fascinating episode. I’ve had many of Europe’s leading AI experts as guests on my podcast, and something interesting keeps repeating: their opinions often vary widely, but beneath all the nuance you can still sense a quiet undercurrent of fear. Not panic. More the kind of fear you feel when standing at the edge of a breakthrough that could reshape global power.
What struck me most in this conversation with Stuart Russell is exactly that tension: the idea that if one nation reaches a decisive AGI breakthrough first, it might fundamentally control the balance of power—economically, politically, maybe even existentially. His “Gorilla Problem” analogy hits hard: intelligence alone decided the fate of an entire species. We tend to forget that.
The parts about extinction risk and CEOs casually estimating a ~25% chance of AI wiping out humanity… that’s wild. Imagine any other industry being comfortable with those odds. It really does feel like we’re playing Russian Roulette at a civilizational level while racing toward a trillion-dollar prize.
youtube · AI Governance · 2025-12-04T13:1… · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
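For reference, the scheme behind this table can be written down as a small Python sketch. The value sets below are inferred only from codes visible on this page (the table above and the raw response below); the full codebook may define additional values, and the names `CODING_SCHEME` and `validate` are illustrative.

```python
# Coding scheme for the four dimensions, restricted to the values that
# actually appear on this page; the real codebook may include more.
CODING_SCHEME = {
    "responsibility": {"ai_itself", "company", "user", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty if it is clean."""
    problems = []
    for dim, allowed in CODING_SCHEME.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} is not a known code")
    return problems
```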
Raw LLM Response
[{"id":"ytc_Ugx-aKGf0p_hQuyTf2d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxFQKAPKOWDySBnKWt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwWc39WtXyHMVG4LQp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5Q3Lq3kuLnHEU_FF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9l5dOh12lHIs1VyZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxOusnwRX4E_TVRt7B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxH7sOhjr5gdxAWi6x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw1ba_YmlEEUCc05Bd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzUiGiMmCWqK-W9MnJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz9RRUs9Alm9ExipbV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"resignation"}]