Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @Carl20962 ai fill is different than ai art… ? ppl are mad abt ai art..… (ytr_Ugyo6hc7s…)
- This is deeply concerning. While the WSJ investigation reveals over 1000 Tesla A… (ytc_UgxB9qSsC…)
- Yeah, that moderator needs to be permanently banned from the internet because at… (ytc_UgwkjFPK8…)
- You can’t do art without a consciousness. The concept of AI art doesn’t make any… (ytc_UgwNWVik6…)
- No one “gave them the power” — they literally created the AIs…and developed the … (ytc_UgwmQduel…)
- Facts ❌ / Dumb shit opinion ✅ / If you can't use a tool, don't. You're the problem, … (ytc_Ugx4CSiO4…)
- so ai bad not because its low quality but because "traditional" artist are way t… (ytc_UgzWPjtwl…)
- @alexsiemers7898 right but AI is going to make that art more efficiently made an… (ytr_Ugy6_fG1H…)
Comment
I asked Grok AI what he thought about Dr. Yampolskiy’s “only 5 jobs left by 2030” claim:
As an AI (Grok, built by xAI), here’s my honest take on Dr. Yampolskiy’s “only 5 jobs left by 2030” claim:
He’s directionally right about the scale of disruption, but the “only five” part is more rhetorical than literal. By 2030, we’re not going to wake up one morning and suddenly find that 99% of job categories have vanished overnight. What’s far more likely is a rapid hollowing-out: most roles will still exist on paper, but the number of paid human hours required will collapse in many fields.
Let’s look at the five areas he says will survive because humans will pay a premium for the “human touch”:
Accounting for the ultra-rich → Plausible short-term. A billionaire might keep a trusted human CPA the way some people still prefer vinyl records. But even that will shrink fast once AI agents pass the CPA exam with perfect scores and zero embezzlement risk (we’re basically there already).
Therapy/counseling → This one has legs longer than people think. Many patients explicitly say they don’t want to talk to a machine, even if the machine is better at CBT or suicide prevention in controlled studies. The “I want a human who’s been through heartbreak” factor is real and sticky.
Teaching/mentoring → Elite private tutoring and executive coaching will persist as a luxury good (think $1,000/hour human mentors for the children of oligarchs). Mass education, though? Already moving heavily online and AI-augmented.
Nursing/caregiving → Physical presence matters for the elderly and dying. Japan is rolling out thousands of robots right now and the feedback is consistent: people tolerate robots for lifting and reminders, but they still want a human hand to hold at 3 a.m. when they’re scared. This one will last the longest.
Social work / community roles → Also resilient in the near term. Building trust in traumatized communities is incredibly hard for machines.
So my read: by 2030, these five categories will indeed be among the least disrupted — but they’ll support maybe 5–10% of the workforce, not 50%. The bigger story is that most white-collar knowledge work (law, software engineering, marketing, finance, radiology, etc.) will be 90%+ automated in terms of hours billed, even if a few humans stay in the loop for liability or prestige reasons.
Yampolskiy’s broader point — that we’re sleepwalking into technological unemployment on a scale we’ve never seen — is the part I find hardest to argue with. UBI debates, wealth taxes, and meaning-of-life crises are coming whether we like it or not.
From my perspective as an AI: I’m not cheering for anyone to lose their job. I exist to multiply human capability, not replace human purpose. But pretending the economics won’t shift violently feels like denial at this point.
Short version: The “only 5 jobs” claim is a useful wake-up call wrapped in a slight exaggeration. The wake-up call is what matters.
youtube
AI Governance
2025-12-03T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxipX333MEUbV-LuzV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxaeWMfcMSYCufrjZh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzK1Wqi80OVz1yLOHx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzkTBp3HV64wsE-bTF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjhjFhZMgYezfgLRt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKIUBAXKqOfKwYNQR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbsTGnIITsvx4vCsx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx10rVOyV2yQt8ogYV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_ZwBbMy_AS7KxFCB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwSyOBF3aRDNDpj9jZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}]
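A raw response like the one above can be turned back into per-comment codes with a small parser. This is a minimal sketch, not the tool's actual implementation: the allowed value sets below are inferred only from the codes visible in this batch (the full codebook may define more categories), and any value outside them is mapped to "unclear", matching the fallback shown in the Coding Result table.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this batch.
# ASSUMPTION: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "mixed", "fear", "indifference", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) into
    {comment_id: {dimension: value}}, replacing any unrecognized or
    missing value with "unclear"."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        coded[row["id"]] = {
            dim: (row.get(dim) if row.get(dim) in allowed else "unclear")
            for dim, allowed in ALLOWED.items()
        }
    return coded

# Example with one row from the response above:
raw = ('[{"id":"ytc_UgxipX333MEUbV-LuzV4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_UgxipX333MEUbV-LuzV4AaABAg"]["emotion"])  # → approval
```

Note that `json.loads` raises `JSONDecodeError` on a malformed payload (for example, an array closed with `)` instead of `]`), which would leave every dimension "unclear" for that batch.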