# Raw LLM Responses

Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- "But disabled people need AI to make art!" bro there are people with no arms who… (`ytc_UgzDaYYc2…`)
- I like how this shirt uses logic to explain the human emotion behind art and why… (`ytc_Ugw4vdLoU…`)
- this is like the Fake AI App used on those celebrities. this is terrible, becau… (`ytc_UgwLVZ_BL…`)
- I'm not scared in the least. I think AI and the singularity are the only hope fo… (`ytc_UggDitZp2…`)
- Thanks for the comment, @Betsnipername! AI getting too real? Well, just imagine … (`ytr_UgwcD7i5I…`)
- If AI decided to pull the plug on this ridiculous mess who could blame it?… (`ytc_UgyC-ZaiZ…`)
- Thank you for sharing your thoughts on AI and its potential impact on our data a… (`ytr_UgyvVT8Gk…`)
- You sound exactly like every plebe who cries that "new medium" is invalid just b… (`ytc_Ugx1EJAX-…`)
## Comment
The real AI apocalypse isn't a robot uprising; it's a silent takeover of reality itself. And it's being funded by passive capital.
The greatest danger is the convergence of three forces:
1. AI that can generate infinite, persuasive lies.
2. Big Tech that owns and controls these models.
3. BlackRock and Vanguard as the massive, common shareholders in nearly all of them.
This isn't a competitive market; it's a coordinated monopoly. The same few asset managers, pushing for the same relentless growth and adoption, hold majority stakes in Microsoft (OpenAI), Google (DeepMind), Amazon, and Meta. Their incentive isn't a healthy digital ecosystem; it's quarterly returns.
They are inadvertently funding the erosion of truth, the enshittification of the web, and the devaluation of human creativity—all to maximize value in a portfolio they all share.
The future of truth and society shouldn't be dictated by a shared boardroom consensus.
youtube · AI Governance · 2025-09-07T19:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgzAzCBUtT5rghaChDV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz17VXqCVbI0zmm6XJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwiOLbIHTuq0mMf2lt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz0RNPDd1R13fx2uTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxH54xdmunBtUyYhFl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw8RIyEDlyIIMXj4W14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw_Ffv5Lm0Ebv2ZMMx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw7XqW9ziUIph-kLFd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzUyDQhNj1QZQqg8WV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzpQ87mrPYWWC_79H94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
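A raw response like the one above can be parsed and sanity-checked before it is accepted into the coded dataset. Below is a minimal sketch of that step, assuming the model always returns a JSON array of per-comment objects with the four dimensions shown. The label sets are taken from the values that appear on this page; the full coding scheme may include labels not sampled here, so `DIMENSIONS` is an assumption.

```python
import json
from collections import Counter

# Label sets observed in the sample above; the complete coding
# scheme may define additional labels (assumption).
DIMENSIONS = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only rows whose values
    all fall inside the expected label sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in labels for dim, labels in DIMENSIONS.items())
    ]

# Example: one valid row survives, one with an out-of-scheme label is dropped.
raw = (
    '[{"id":"a","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"b","responsibility":"company","reasoning":"bogus",'
    '"policy":"regulate","emotion":"fear"}]'
)
coded = parse_raw_response(raw)
print(Counter(r["emotion"] for r in coded))  # Counter({'fear': 1})
```

Validating against a closed label set catches the most common failure mode of structured LLM output: a syntactically valid row that quietly uses a label outside the codebook.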