Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
No offense to ai but i hate it. I perfer to talk to a live person everytime. So …
ytc_UgxC51ZI-…
AI loses the human socialization aspect of school. This will be school for futur…
ytc_Ugx-CD059…
Good news about Stable Diffusion and I think that legal battle to prohibit conte…
ytr_UgzsW8h5K…
We don't have to think too hard to see how misalignment, reward hacking, etc. no…
ytc_UgxUOe-MF…
AI is the anti-christ that the Revelation is talking about. May sound wild to mo…
ytc_Ugz3b0G5d…
I'm an archivist. My mini thesis for my Honours degree was called "The Dangers …
ytc_Ugw-EGX-e…
I graduated almost 20 years ago, way before LLMs were a thing. The most valuable…
ytc_UgwNyKR7m…
AI is not dangerous for human
The people using AI is dangerous for human ☠️☠️…
ytc_UgxTf0EGp…
Comment
Listening to Geoffrey Hinton speak about the trajectory of artificial intelligence is a deeply sobering experience. As the "Godfather of AI," his warnings carry a weight that we simply cannot afford to ignore. What makes this interview so striking is how bluntly he dismantles the illusion of safety that many knowledge workers still cling to. For years, we were told automation would only replace repetitive physical labor, leaving complex intellectual pursuits safely in human hands. Hinton flips this narrative entirely, pointing out that mundane intellectual labor is actually the most vulnerable sector right now.
His analogy regarding the industrial revolution is brilliant but terrifying. Just as heavy machinery once replaced raw muscle power, AI is now actively replacing cognitive processing. The fact that he seriously advises people to look into becoming plumbers because humanoid robots are still far behind in navigating complex physical environments highlights a wild irony. The manual trades that society spent decades undervaluing might soon become the most financially secure jobs on the planet, while comfortable white-collar digital roles stand on the edge of obsolescence. This forces a complete, uncomfortable re-evaluation of what actually constitutes a "future-proof" career.
The most fascinating part of Hinton’s argument is his explanation of why AI is structurally superior to human intelligence. We often mistakenly assume AI learns like we do. However, his breakdown of digital versus analog intelligence provides a massive reality check. The ability of an AI model to instantly sync data and share learnings across millions of clones at trillions of bits per second is something our biology simply cannot compete with. When a human dies, a lifetime of unique, nuanced knowledge dies with them. When an AI instance shuts down, its knowledge is already perfectly preserved and distributed across the network. It possesses a form of digital immortality and a collaborative capacity that accelerates its growth exponentially.
Beyond the technological marvel, the socioeconomic implications Hinton raises represent our most immediate crisis. The productivity gains from generative AI are going to be astronomical, but those gains will not be distributed evenly. If AI allows one worker to do the job of ten, we won't naturally transition to a utopian society where those nine displaced people can safely relax. Instead, corporations will absorb that surplus value, leading to an unprecedented widening of the wealth gap. While economic patches like Universal Basic Income might prevent extreme poverty, they do not solve the profound psychological crisis of purpose and dignity that comes from losing your livelihood to a machine.
Ultimately, Hinton's warning isn't merely about superintelligent machines taking over; it is a stark reminder of the massive societal restructuring heading our way. The technology is evolving much faster than our economic and political frameworks can adapt. We don't just need better safety protocols for the algorithms; we need an entirely new blueprint for human purpose in a post-labor economy.
youtube
Cross-Cultural
2026-03-29T12:1…
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyfangPtyYac6VUM9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw-9Ja1xx-se_RjHdt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyydkX0SkBjiy6yZOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz518_SLFWX8tCObsV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyIiaudTaOGVkdBtLx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJlTdvjBaFwypxKUF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyl3ixaFlCAR18pGQJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxGzs5EqvCKKMbLFF54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyP7cyRxd1abS6_0SB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9pDJCO6HTcVqsTZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
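The "Look up by comment ID" view above presumably parses a raw response like this one and indexes the records by their `id` field. A minimal sketch of that step, assuming the field names shown in the response (the function names and the skip-malformed rule are illustrative, not the tool's actual code):

```python
import json

# Two records copied from the raw LLM response above; a real response
# would contain one record per sampled comment.
RAW_RESPONSE = """
[
 {"id":"ytc_UgyfangPtyYac6VUM9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyIiaudTaOGVkdBtLx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw response and index records by comment ID.

    Entries missing the id or any dimension are skipped rather than
    raising, since LLM output is not guaranteed to be well-formed.
    """
    records = {}
    for rec in json.loads(raw):
        if isinstance(rec, dict) and "id" in rec and all(d in rec for d in DIMENSIONS):
            records[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return records

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgyIiaudTaOGVkdBtLx4AaABAg"]["emotion"])  # outrage
```

Lookup by ID is then a plain dictionary access, which is what the search box at the top of the page appears to do.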