Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
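The on-screen IDs are truncated, so programmatic lookup works best as a prefix match against the full IDs stored with the coding results. Below is a minimal sketch, assuming the coded records were exported as a JSON array in the same shape as the raw response shown at the bottom of this page; the file name `coded_comments.json` is hypothetical.

```python
import json

def lookup_by_id(records, id_prefix: str):
    """Return every coded record whose comment id starts with the
    given (possibly truncated) prefix, e.g. 'ytc_Ugx8BsTLC'."""
    return [r for r in records if r["id"].startswith(id_prefix)]

# Hypothetical export of the coding results (the path is an assumption).
with open("coded_comments.json", encoding="utf-8") as f:
    records = json.load(f)

for match in lookup_by_id(records, "ytc_UgwFZwLtT1p2eGKtEON4AaABAg"):
    print(match["id"], match["responsibility"], match["emotion"])
```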
Random samples (click one to inspect):

- "19:30 There are actually a few sci-fi authors who have tackled the topic of adva…" (ytc_Ugx8BsTLC…)
- "Ai wont replace human creatives... but you'll find 3d artists who use ai will ta…" (ytc_UgzhUcA_w…)
- "As an up-and-coming Software Engineer, this makes me happy. I started my Bachelo…" (ytc_Ugx8B_MtA…)
- "We cant even agree that nuclear bombs are a bad thing. Good luck getting a tin p…" (ytc_UgwamWpQm…)
- "What would be the consequence to a culture that has written programs for ARTIFIC…" (ytc_Ugz5Dj0HN…)
- "Well this is clearly biased towards how management see AI and not how it sits in…" (ytc_UgyuCxpC6…)
- "Thank you for sharing your perspective. On the AITube channel, we focus on explo…" (ytr_UgxO1GBnF…)
- "If you're fine with being forever stuck in 2023 when it comes to art, sure, sinc…" (ytr_Ugys0bVW9…)
Comment
> This is the stupidest way to ask this question. The AI hype machine went full blast as "scale" (and therefore massive investment) became the logic in silicon valley. They needed to justify the 2 trillion in investment in data centres - hence, the "be terrified of my super powerful invention" narrative which gripped media reporting on "AI" in recent years. But even they are now admitting that LLMs and other current AI models will never be "super intelligences" (AGI or ASI). At least not in the way that the question being debated here presumes. Nevertheless, "public intellectuals" like Harari, Zizek and Fry continue to flog their opinions all over the internet and at any event that will host them. They are discussing something they seem to fundamentally misunderstand and hypothesising about a future that is incredibly unlikely.
Platform: youtube · Tag: AI Governance · Posted: 2025-07-21T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwFZwLtT1p2eGKtEON4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzmg3Eb2I3PZAeON394AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy3Xe-Zvhu2OJoXHex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7YAZ2pX0O2Suh6mt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxhvcsItIdMOxRO-Jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz3GdOhDzXwHZQ5TSp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTM7EwXsmg0AjdfYN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxQ_gKuIf-KUriojwF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzVOv8x8DbaRfHV6iZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwUFP2e04fE-zGe_x54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
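Each batch comes back as a single JSON array with one object per comment; the record matching the comment ID is what populates the Coding Result table above. A minimal parsing-and-validation sketch follows; the allowed values per dimension are inferred from the sample output here, so the vocabularies are assumptions rather than the project's full codebook.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_batch(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment id,
    rejecting any record with an out-of-vocabulary dimension value."""
    coded = {}
    for record in json.loads(raw_response):
        for dim, allowed in ALLOWED.items():
            if record.get(dim) not in allowed:
                raise ValueError(f"{record.get('id')}: bad {dim}={record.get(dim)!r}")
        coded[record["id"]] = {dim: record[dim] for dim in ALLOWED}
    return coded

# For the response above, the displayed comment's coding is recovered with:
#   codings = parse_batch(raw)
#   codings["ytc_UgwFZwLtT1p2eGKtEON4AaABAg"]
#   -> {'responsibility': 'company', 'reasoning': 'consequentialist',
#       'policy': 'none', 'emotion': 'outrage'}
```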