Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugzkn4_NU…`: "If ai can organize my computer files, I will be happy. I don’t need it to make i…"
- `rdc_crxh0gt`: "As of last month, all of the minor accidents were the fault of human drivers and…"
- `ytc_UgxJoOYSx…`: "Both Capitalism and Socialism will make AI dangerous. We need a Distributist Rev…"
- `ytc_Ugx4MdPJK…`: "This topic resonates with me. My daughter draws a lot and she has a decent follo…"
- `ytc_UgxH-TwRW…`: "I mean.... i just would of reached up there and hit the power button.... but tha…"
- `ytc_UgyHle1uk…`: "Who's gonna have money to buy to those enterprises that already switched humans …"
- `ytc_Ugx5DkA9n…`: "1:27:00 - if job displacement takes place but AI absorbs the workload and govern…"
- `ytc_UgymOJJpa…`: "I also just don't feel proud of art generated through AI. No matter how much pro…"
Comment
What often gets overlooked is that the current way of making AI smarter, mainly by making models bigger and using more data, is starting to hit real limits. There are diminishing returns, meaning it takes much more computing power for smaller improvements. There is also a limit to how much useful data exists, and models can only learn what is in that data. On top of that, the cost of training and running these models is extremely high in terms of money, energy, and hardware. There’s way more to it than can be explained in a short comment, but the main point is that it’s not just about scaling anymore. Without new ideas or breakthroughs, this approach is already reaching its limits.
Platform: youtube · Topic: AI Jobs · Posted: 2026-03-19T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwklMc5OK6ZqTQ3TYB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyLKnbKf5L4EYqXnpx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyde2R6sAApOxU5wyd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzY8gC7482Q-rycBKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyNipVgG_XGDxuSLk94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgyVNcQdnuzXCBIatBx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzjs5gERbtWDQzuRRN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxW_jykRum_dYt-2-54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz3MazuvOIfVeQQo8t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzE31MRlo86wdb33RZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
```