Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
1:18:39 can we also talk about comparing one frame from a two hour long animatio…
ytc_UgyldxmzZ…
i think the only real acknologdeable thing about this is the skill to actually b…
ytc_Ugww4rcst…
Yeah well let me know when they start using AI to control traffic lights at inte…
ytc_UgzELiI72…
This is why I only use YT. Deleted all my SM a while back so hopefully that HELP…
ytc_Ugzji275C…
7.5 *BILLION* miles done by Full Self Driving!*
*All done with a human …
ytc_Ugy-vFQdw…
I really hate when people trying to pass an AI art as real to get a gotcha momen…
ytc_UgyD_3hwW…
it's not a case of them being smarter than us.
I believe it's more so AI being…
ytc_UgzfTHmzY…
@thewannabecritic7490🥀🥀🥀 ai won't replace artist... If people have fun making t…
ytr_UgxapSuD0…
Comment
NO, NO , NO... What people call “AGI” right now is mostly marketing. LLMs and “agents” are useful, but they are not general intelligence. LLMs scale with a clear problem: you burn vastly more compute for smaller gains. That diminishing return matters because it turns “just scale it” into a power and cost wall. A system that needs huge GPU farms to get marginal improvements is not on a clean path to human level general intelligence. And the “agent” layer doesn’t fix the core issue. Agents are task loops: call the model, check output, call tools, retry, patch failures, repeat. That can reduce hallucinations by adding filters and verification steps, but it’s still a brittle routine. It’s closer to automated workflow than a mind. Iterating until you get a coherent answer is not the same as understanding, learning, or reasoning robustly across new situations. So yes, LLMs have a scaling and efficiency problem, and agents are mostly a wrapper that compensates for weaknesses. That combination can produce impressive demos, but it’s not AGI.
youtube
2026-02-06T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxULa83FZ45v4baS4B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOMcz4ECaofmsxYRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAHji4ybUbrw9hApl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyq-wZA5h8aqDzbkXB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybcsHqzXzMqgDQFIR4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxDEk5XLRAtwFS0dIV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYrnJRGPMuJMah83x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw4SY4f03fOfKYHNhx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyRQzKHHbaaOgtfcDR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"hope"},
{"id":"ytc_UgxYsTz43jL9j9D914F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
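The "look up by comment ID" step can be sketched in Python. This is a minimal example, assuming the raw model response is a JSON array of coded comments in the format shown above; the `index_by_id` helper is hypothetical, and the two records are taken from the response above.

```python
import json

# Example raw model output: a JSON array of coded comments
# (two records copied from the response shown above).
raw_response = """
[
  {"id": "ytc_UgyAHji4ybUbrw9hApl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxYrnJRGPMuJMah83x4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
"""

def index_by_id(response_text):
    """Parse the raw model output and index each coded comment by its ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
entry = codes["ytc_UgyAHji4ybUbrw9hApl4AaABAg"]
print(entry["emotion"])  # outrage
```

Indexing by ID makes each lookup O(1), which matters when cross-referencing many coded comments against a large sample table.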