Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up directly by its ID, or pick one of the random samples below.
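Under the hood, the lookup is a scan of the stored batch output for a matching `id`. A minimal sketch in Python, assuming the responses are kept as a JSON array shaped like the one at the bottom of this page (the file name and function are hypothetical):

```python
import json

def lookup_by_comment_id(path: str, comment_id: str) -> dict | None:
    """Return the stored coding record for one comment ID, or None."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # list of {"id": ..., "responsibility": ..., ...}
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Hypothetical file name; the ID is taken from the batch shown below.
print(lookup_by_comment_id("raw_llm_responses.json", "ytc_UgwRGQ_2o9LwZWBgdqJ4AaABAg"))
```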
Random samples
- `ytr_Ugz5iXoWv…`: Yes. AI is going to replace a ton of stuff, the sooner you get over yourself the…
- `ytc_Ugw0fXLqo…`: That piece was awful but some of the AI art is really great. The word “Art” or “…
- `ytc_Ugx6vDPoN…`: The root of the word "robot" comes from robota-- Slavic for forced labor. Our ma…
- `ytr_UgyFisiVl…`: Nah. I know coding, but still get AI to guide me through the planning stages, te…
- `ytc_UgwW2M27N…`: AI aside, 10:20 im confused how this person thinks pokemon games are gambling? L…
- `ytc_Ugycy61gU…`: It's my impression that people *believe* they can take a nap while these are on …
- `ytc_UgwINn7Hv…`: A big problem is that a lot of AI art online is "prompting", which not only look…
- `ytc_UgwCatZ2j…`: AI is just a tool, like a pencil. The problem starts when people use it to gener…
Comment
It's not yet clear to me that LLM-based models can do a better job than humans. To this day, DeepSeek and ChatGPT struggle with basic mathematical concepts. The general rule thus far is that LLMs should not be depended on for any real-impact decisions that require rationality and logic, and this is not mere human sentiment; it is coming from IT departments that should, in theory, be embracing the AI marketing and ideas.
All of this will be true if LLM models achieve a real sense of AGI, but thus far they are used mostly as tools. Unless something fundamental changes, current LLM-based AGI imitations will increasingly become a tool that lets human workers accelerate computing labor, while work that does not require critical thinking and real decision-making will be replaced. I don't think this will lead us into a cyberpunk dystopian horror, but it will widen the growing gap between people with assets and people without.
youtube · Viral AI Reaction · 2025-11-23T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
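
For anyone scripting against these results, a minimal typed view of one record might look like the sketch below. The `CodingResult` class and the choice of comment ID are assumptions for illustration; the field names mirror the table above and the keys in the raw response.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # e.g. "none", "government", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological"
    policy: str          # e.g. "none", "regulate"
    emotion: str         # e.g. "indifference", "fear", "resignation"
    coded_at: datetime

# Values from the table above; the ID is illustrative, taken from the batch below.
result = CodingResult(
    comment_id="ytc_UgwRGQ_2o9LwZWBgdqJ4AaABAg",
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
print(result.emotion)  # indifference
```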
Raw LLM Response
```json
[
  {"id":"ytc_UgxZZwp4nl3iMjLiFah4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgwRGQ_2o9LwZWBgdqJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxlSOK4Esn4aznoxW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxSfsNPF9wm-hFA16Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzvXrFaBdhUvKM03UF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgznVOTWp56KYjh9TbJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWE1xeV2QLpbbV8994AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxRH_svzNVRZ79YDMx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAgZ0lgWLgbjLFEgl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw6gDSJW0pdoqxzqyp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
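
Downstream, a batch like this is plain JSON to parse, validate, and aggregate. A minimal sketch, with hypothetical IDs standing in for the raw model output string; real output may need extra handling, such as stripping code fences or retrying on malformed JSON:

```python
import json
from collections import Counter

# Stand-in for the model's raw output string; the IDs here are hypothetical.
raw = """[
  {"id": "ytc_example1", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "government", "reasoning": "deontological",
   "policy": "regulate", "emotion": "unclear"}
]"""

records = json.loads(raw)

# Every record should carry the four coded dimensions plus the comment ID.
required = {"id", "responsibility", "reasoning", "policy", "emotion"}
for record in records:
    missing = required - record.keys()
    if missing:
        raise ValueError(f"record {record.get('id')} is missing: {missing}")

# Quick distribution over one dimension, e.g. emotion.
print(Counter(r["emotion"] for r in records))
```

Run against the ten records above, the tally would show `indifference` three times, `resignation` twice, and the remaining emotions once each.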