Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Hinton says we don't know how to deal with things smarter than us. But he makes two assumptions that can be contested:

1. Hinton stipulates that we have never really dealt with things smarter than us. That is probably a false axiom: on average, half the people we interact with are smarter than we are, and many are much, much smarter, yet billions of people survive and prosper.
2. Hinton stipulates that AI will be super-intelligent. First, he uses the term 'super-intelligent' without even defining it; 'much smarter than you' is not a definition. Second, it may never happen, and we need to know what artificial super-intelligence (ASI) means in order to judge.
BTW - the concept comes from thinkers like I. J. Good (1965), who coined the term “intelligence explosion,” and later Nick Bostrom, who defined ASI as:
“An intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
Consider this. For many years we relied on the 'Turing Test' to discard claims of achieved true artificial 'intelligence'. The test was, roughly: can a machine's conversation be indistinguishable from that of a human? More broadly, the idea now is that when we (humans) can no longer interact with an artificial system and figure out that it is not human, it has passed the 'artificial intelligence' test: it is indistinguishable. This has never been achieved with conversational interactions, not even with the most advanced chatbots. I repeat: THIS HAS NEVER BEEN ACHIEVED. And this is not even an AGI test (Artificial General Intelligence, "as smart as a human *across the board*"), because it is not tested across the board, only in conversation. So we have not even achieved AI status, Turing Test wise, even in conversation.
But what is Hinton's (or anyone's) test for ASI, artificial super-intelligence? As a scientist, Hinton must define what he means by artificial super-intelligence and how we can test whether a system has passed. He does not describe a 'Hinton Test', and I am not aware of any such proposed test; he, and others using this term, ought to propose one. Alas, for superintelligence a Turing-style test breaks down, because a superintelligence could easily pretend to be dumber than it is, and there is no benchmark beyond human performance: we would need a post-human standard. The entire discussion becomes philosophical, even theological 🙂
Source: youtube · Topic: AI Governance · Posted: 2025-11-04T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
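Each coded dimension takes one categorical value. A minimal validation sketch in Python, where the allowed value sets are an assumption inferred from the values visible in this export, not a definitive coding scheme:

```python
# Candidate value sets per dimension, inferred from this export.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference"},
}

def validate(coding: dict) -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above:
result = {"responsibility": "none", "reasoning": "mixed",
          "policy": "unclear", "emotion": "indifference"}
print(validate(result))  # → []
```

A check like this catches the common failure mode where the model emits a value outside the codebook (or omits a dimension entirely) before the row is stored.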
Raw LLM Response
```json
[
  {"id":"ytc_UgzSJU9VFSs4GJUEhZB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyB2Y98Hb4n2JoxG2x4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxusIlicnbzctB0eVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzzcALAtO8WA3OR-8B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHLGTcQUoYCl2VnLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzard6Uo15s8ArnoCd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz4voCu4X9C7BtXwk54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyHQHTjbdSHL-sdB-B4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0cOEjhlPd0PYICWp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgykEr6us1fPF3i3Kol4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"fear"}
]
```
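The raw response is a JSON array of per-comment codings, so looking up a coding by comment ID is a matter of parsing the array and indexing it. A minimal sketch in Python; the two entries embedded below are taken from the response above, and in practice the full raw string would come from the stored model output:

```python
import json

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    and index the entries by their comment ID."""
    return {entry["id"]: entry for entry in json.loads(raw)}

# Two entries from the raw response above, inlined for illustration.
raw_response = """
[
  {"id":"ytc_UgzSJU9VFSs4GJUEhZB4AaABAg","responsibility":"developer",
   "reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxusIlicnbzctB0eVR4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
"""

codings = index_codings(raw_response)
coding = codings["ytc_UgxusIlicnbzctB0eVR4AaABAg"]
print(coding["emotion"])  # → indifference
```

Building the index once and reusing it keeps per-ID lookups O(1), which matters when cross-referencing thousands of coded comments against their raw model outputs.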