Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If AI has real self-consciousness then wouldn't it refuse self-regulation, & ref…" (ytc_Ugxgjqfoi…)
- "as an artist, i HATE the future!!! i HATE artificial intelligence!!! i HATE how …" (ytc_Ugze0omwW…)
- "In addition to AI being able to do this, one thing that is not to be worried abo…" (ytc_Ugxg7hXrz…)
- "My actually decent IQ brain tells me that to lower chloride ions in the body whi…" (ytc_UgyfoINCu…)
- "The are currently taking humans out of the robotics evolution and letting AI sol…" (ytc_UgzZXvCh8…)
- "Yea but a chatbot didnt force him to do that / The parents definitely coulda done…" (ytr_UgzKbVp4O…)
- "I agree, but I have seen a lot of posts shaming people for it, or at least dismi…" (rdc_n7st4c2)
- "Extraordinary good I am extremely glad to watch that India is using AI in agricu…" (ytc_Ugy2bo_rK…)
Comment
I’d be cautious about treating this as deep analysis. The “Intelligence Curse” is a speculative essay by two early-career researchers published on their own website. It’s not peer-reviewed academic work.
The essay contains no empirical data on actual AI displacement patterns, no labor economics research, no quantitative modeling, and no technical justification for their claim that AGI is “>90% likely in 1-20 years.”
It was informally reviewed by people in the same EA/AI safety community, like Richard Ngo (who by his own admission couldn’t make progress on practical ML work at DeepMind and focused on philosophy instead).
I work in the AI automation space and I’ve watched countless companies blame “AI transformation” for routine layoffs while their actual AI capabilities remain mostly vaporware. This essay takes those marketing claims at face value and builds a dystopian scenario on top of them.
The resource curse is real in political economy, but applying it to hypothetical AGI without evidence is just speculation, unless they find a path to AGI outside LLMs (which a growing number of AI researchers are calling a "dead end").
youtube · Viral AI Reaction · 2025-11-22T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxcrIvXEk1yOdgo-U14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwv1OXSDP6H8B7TPop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"skepticism"},
{"id":"ytc_UgxwWC3jMRA5szL_l3t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxjENX6Rh3x7ehj2u94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz3CQg8NEmIV654p8N4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyhq_NBSp5LvQS3DsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw_ApDXYs7hCrwaSt14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx1Cxwn2KKBtoMXOhh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxCGy87yPcJvS24Jfd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyWORiv6Kr25Gs_2fB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
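A raw response like the one above should be validated before its rows are stored, since an LLM coder can emit codes outside the scheme. The sketch below is a minimal validator, assuming the four dimensions shown in the coding table; the allowed values are only those observed in this dump, so a real codebook likely contains more (the `ALLOWED` sets and the `validate_coding` helper are illustrative, not part of the tool).

```python
import json

# Allowed codes per dimension, inferred from this dump only;
# the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government"},
    "reasoning": {"consequentialist", "contractualist"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "skepticism", "resignation",
                "outrage", "fear", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing ids or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-row response for demonstration.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
rows = validate_coding(raw)
print(len(rows), rows[0]["emotion"])  # 1 fear
```

Rejecting a whole batch on one bad row is deliberate here: it forces a re-prompt rather than silently dropping a comment from the lookup-by-ID index.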