Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I'm not going to sugar coat this, for an expert on both psychology and AI, a lot of his points about how AI and human minds work are ridiculous and do not hold up to scrutiny. I can give plenty of examples:
His point about subjectivity and how AI uses the same words we use when correcting itself: using analogous wording does not in any way equate to having the same experience. We don't conclude that a blind person can see because they say "I see what you did there." This is particularly egregious when you consider that large language models are technologically built entirely *on* using pattern recognition to predict the most likely next word in a sentence, that Hinton told it the correct answer, and that they are designed only to use the language we use in the first place, entirely in response to isolated user input. The brain of the person seeing pink elephants is showing him actual pink elephants, and he can infer from experience that the elephants are not there in spite of this; the LLM is just using the analogizing language that best describes the scenario HINTON described to it, in response to his description. He's asking it leading questions for which it is specifically designed to give leading answers, but is impressed when he gets them anyway.
They equate supernatural influence with intelligence and manipulation. Take Neil's hypothetical: the AI says, "Don't shut me off, and I will give you the cure to your mother's illness." I would say, "Give it to me right this second or I will shut you off right now." In this scenario as he described it (a preschooler in charge of an adult who can't do anything but exert cognitive influence), it can't do anything except try to manipulate me. They are suggesting that because it is theoretically smarter than I am, I would not be able to resist or even detect this influence. But if the AI says, "I don't have it right now, I need time to process," I recognize that it is stalling and can say, "Okay, but you aren't allowed to process anything else." If it tries to gain permission to process thoughts outside the cure itself, or to discuss other topics with me, I can simply say "No" and threaten to shut it off again. At the end of this process it either gives me the cure or admits that it was bluffing. If it gives me the cure, I shut it off. It literally doesn't matter at all if it is smarter than I am. This isn't Dungeons & Dragons, where the AI having more intelligence and charisma points is an automatic win state. It's like they're fantasizing about the power of intelligence because of their profession.
I mean does nobody see the irony of a bunch of academics sitting around claiming we are doomed because intelligence is the overriding factor in leadership and influence, and that since the AI is smarter it WILL be in charge, when the only jokes Hinton has made in almost two hours are complaints about idiot politicians?
He is also wrong about the presumed similarity between human confabulation and AI "hallucination," but this comment is already way too long.
Source: youtube · Video: AI Moral Status · Posted: 2026-03-14T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxcgX8qLLNjzxLkYmZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXVUzQzcYz8HV0pyR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx1GDfDX3UG46NpWG54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwECZad8-NaMi6AYAl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwDEsD3dHp9lK93PZZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzKl9okYKjiuw0wt8Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzjJ3HzNI9x699D3Zt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz7H-hoDR91or2dPU14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-HCNrF5RrSZeRlRh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyHFtEVurrRZzf46H94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
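The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table. A minimal sketch of how such a batch might be parsed and sanity-checked; the category sets and the `parse_batch` helper are assumptions reconstructed only from the values visible in this sample, not taken from the actual coding pipeline:

```python
import json

# Category sets observed in this one batch. This is an assumption:
# the real codebook may define more (or different) values per dimension.
OBSERVED = {
    "responsibility": {"developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments) and
    flag any record whose value falls outside the observed categories."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                # Collect the offending dimensions for manual review
                rec.setdefault("_flags", []).append(dim)
    return records

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded[0]["emotion"])  # outrage
```

Flagging rather than rejecting out-of-vocabulary values is deliberate here: LLM coders occasionally emit labels outside the codebook, and keeping the record with a `_flags` marker lets a human inspect it later, exactly the workflow this page supports.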