Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- `ytc_UgxA4Ucq5…`: "AI isn't hidding anything. I is built to make money for big Tech companies. It c…"
- `rdc_ecz80e7`: "Hundreds of people die every day from car accidents and nobody bats an eye but i…"
- `ytc_UgyY0TLSa…`: "Ah yes… the ol' 'I STOLE IT FIRST, SO IT'S MINE!!!'… to all Ai users, GET YOUR C…"
- `ytc_UgwYrYz7-…`: "🤣🤣 this conversation deletes all fears I had about AI😂 What a couple of idiot AI…"
- `ytc_Ugyn6nI-b…`: "'Apple has told the BBC that it does not use facial recognition technology in it…"
- `ytc_UgzrGFbUC…`: "This only works with kids that never struggled with school to begin with. Let's …"
- `ytc_UgzKjtVTx…`: "Max Tegmark is not an A.I. Expert. He a physicist with an opinion like everyone …"
- `ytr_Ugyc21grE…`: "@yrurgrhhr Idk what do you want to say specifically ? Like as an ARTIST in an A…"
Comment
Aight, let me say this with all due respect to Geoffrey Hinton — the so-called “Godfather of AI” — but some of this interview had me nodding my head like “yep, that’s facts,” while other parts had me squinting like someone just handed me a Norwegian tax letter in Old Norse.
Let’s talk about it.
1. First off — the man is right about the misuse of AI.
Cyber-attacks?
Political manipulation?
Deepfakes that look more real than half the influencers on Instagram?
Yeah, that’s happening TODAY. Not tomorrow, not 2030 — now.
Nobody’s disputing that. AI has already turned the internet into the world’s most chaotic group chat.
And he’s right that corporations ain’t slowing a damn thing down.
Regulation moves like the Posten Norge van in a snowstorm — slow, confused, and probably stuck on a hill.
2. But when Hinton starts talking “AI will have emotions, consciousness, subjective experience” like it’s Wall-E with anxiety? Bro… stop.
That’s where my eyebrow hit maximum altitude.
Man talking like ChatGPT is at home journaling about its childhood trauma.
No, sir.
Today’s AIs have:
No body
No hormones
No survival instinct
No self
No childhood
No community
You can’t have consciousness if you ain’t never had a headache, a heartbreak, or a heated argument with customer support. Period.
What he’s describing is sci-fi philosophy with a sprinkling of “I’ve been thinking too much in my office.”
3. The bioweapon panic?
Relax.
AI can help design proteins, yes — but it can’t walk into a lab, put on gloves, and whip up a world-ending virus like it’s making ramen noodles.
You’d still need:
a BSL-3 lab
equipment
biological expertise
money
time
and a complete lack of self-preservation
Saying “one guy could end the world with AI and a laptop” is Hollywood talk, not science.
4. The job apocalypse?
Listen:
A lot of jobs ARE in danger — especially white-collar ones.
But the idea that AI is gonna take EVERY job?
That every human will be unemployed, purposeless, and depressed?
Nah.
Humans have adapted through every technological shift since we stopped chasing mammoths.
New industries always rise out of the ashes.
It won’t be painless, but it won’t be extinction-level misery either.
And Hinton saying “become a plumber” like he’s giving stock tips?
Bruh, nobody is dropping 200,000 NOK on VGS vocational training just because a chatbot can write emails.
5. The superintelligence doom scenario?
This is where he loses me completely.
The logic goes like this:
“AI is smarter than us.”
“It will get even smarter.”
“And then… I dunno… it might just decide to kill us all.”
That’s not logic — that’s vibes mixed with anxiety.
AI doesn’t “want” anything.
It has no motivation, no fear, no ego, no hunger, no instinct for survival or power.
For something to “take over,” it needs:
agency
goals
self-interest
continuity
embodiment in the physical world
LLMs today have NONE of that.
So the chicken analogy? Cute, but irrelevant.
6. The part that was accurate, though?
The competition between nations.
THIS is the real threat.
Because AI doesn’t kill people.
People using AI — or racing to control AI — kill people.
When governments start using AI for:
autonomous weapons
psychological warfare
cyberattacks
political manipulation
…THAT’S the danger.
The problem isn’t the machine.
It’s the humans with the machine.
Same as it ever was.
7. Final Verdict:
Hinton gave:
70% real warnings we should take seriously
20% philosophical speculation dressed like science
10% pure dystopian “I watched too much Black Mirror” energy
But here’s the part he didn’t say:
The biggest risk in AI isn’t that it becomes smarter than humans —
It’s that humans stay dumber than the machines they build.
If governments keep chasing profit,
If corporations keep chasing scale,
If we let the algorithms shape society instead of society shaping the algorithms…
Yeah. Then we got a problem.
AI isn’t the danger.
Lack of accountability is.
And that’s on us — not the machines.
Platform: youtube | Topic: AI Governance | Posted: 2025-11-29T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzi8mobIusL1xVy7iB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy0qkSDhJQTMCNntvV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMt1_UPkwrvpmid_d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgylKT0vkaS_LV-rDeJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugywduc7xTv64A1jOrN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvC5tXaCFYs5voPj94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx3mTha2e9UdzQ4B5B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwkFYrpD30GV4N8FYd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy1mx6U5IFbAe04QyB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7q1PmQZCIjpEUcbx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
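The raw response is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of looking a coding up by ID, assuming only the schema shown above (`lookup_coding` and `raw_response` are hypothetical names, not part of the tool; the single sample object is copied from the response):

```python
import json

# Raw model output: a JSON array, one coding object per comment.
# One object is reproduced here from the response above.
raw_response = """
[
  {"id": "ytc_UgyMt1_UPkwrvpmid_d4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw LLM response and return the coding dict for one
    comment ID, or None if that ID was not coded."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgyMt1_UPkwrvpmid_d4AaABAg")
# coding["policy"] is "regulate", matching the Coding Result table above
```

This is the same lookup the "Look up by comment ID" view presumably performs: parse once, then match on the `id` field.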