Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- "Woah, is this using the new voice mode Alex? I assume you haven't edited this to…" (ytc_UgxQdyENZ…)
- "The human mind is still, 100% better than AI., who needs AI? Only the dumb and l…" (ytc_UgzVQG_9I…)
- "The real problem is not who developes AI, but who will use it, and in which purp…" (ytr_Ugya4kjAL…)
- "My first thought was the eyes are all wrong.. but then I don’t want to say that …" (ytc_UgxDjru35…)
- "If AI wants to get rid of us, all it has to do is crash the global stock market …" (ytc_UgyJFcq5H…)
- "At 50:00 he makes a very important point about ai being able to write its own co…" (ytc_UgwE95kMI…)
- ""ORigInAlTy is DeAd aI iS THe FuTURe"CAVEMEN HAVE USED CHARCOAL AND ROCK POWDER,…" (ytc_Ugzp84aE4…)
- "They don't have to worry about artificial intelligence, all they have to do is l…" (ytc_UgzVmoHBQ…)
Comment
How about a debate? There are three views, as I see it, about AI. Using the political nomenclature, on the left, you have the technology owners like Altman and Zuckerberg. They are running to get to AGI and superintelligence without interference. As Altman has said, there will be "turbulence". In other words, significant layoffs. In the center are people like the Dean of Journalism at Columbia University, who believes that we lose some jobs and gain some jobs in his field. As he states, "who knows?" The right are people like Geoffrey Hinton who want to put the brakes on the development of A.I. He believes that without safety and security oversight by governments and citizens, humans will be subservient to Superintelligence (a view similar to that adopted by Senator Bernie Sanders). Podcasters, like yourself, are doing one-offs that sound more like preaching rather than getting to the truth.
Platform: youtube · AI Governance · 2025-12-10T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyIjXQ60ZYXE3HkfF14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzquxVYUk7KqaWaOo14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyuuRnCeEtJE3XwdKF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy_S38cJ9aHWNvyd1F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqUCs9MWGGf8O8uK54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx9xd3Hjy9dH1vViwd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHAIyBUqddNhY6Kmp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwTU4aGuIViQz2yIJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzVhcv-Lk6ag_Q6Dfl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugwr1Uc3lZje9Ja0YF94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
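Because the raw model response is a plain JSON array, it can be indexed for the look-up-by-comment-ID workflow with a single parse. A minimal sketch (the two rows are excerpted verbatim from the response above; `index_codings` is a hypothetical helper, not part of the tool itself):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two rows excerpted from the full response above).
raw_response = """[
  {"id":"ytc_UgyuuRnCeEtJE3XwdKF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwTU4aGuIViQz2yIJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_UgyuuRnCeEtJE3XwdKF4AaABAg"]["emotion"])  # indifference
```

In a real pipeline a `json.JSONDecodeError` here would signal that the model strayed from the requested format, so wrapping the parse in error handling is the natural next step.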