Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
You gotta wonder if the new phones that use facial recognition technology to ope…
ytc_Ugwwe-yKr…
I am pretty sure when politicians or their family members become Deepfake Victim…
ytc_UgyOo9lH3…
3:56 Data goes in, goes through instructions, result comes out
4:16 What is the …
ytc_UgxeIEH5f…
I took a Waymo (driverless car) in San Francisco several times, and I loved it. …
ytc_Ugz9Qy0IC…
This video talks about children, but the target is not only children. I've seen …
ytc_Ugyjsfzv1…
Watch the video of the automated Taxis leaving a car park... That video alone is…
ytc_Ugzm2o3VR…
Everyone knows AI is not a real thing so this all sounds silly to me. The death …
ytc_UgwfJOJKk…
Remember, after you delete a chatroom to never use it again, it means you "kill"…
ytr_UgxB2Z2IL…
Comment
"AI gonna replace software engineers in a few years, i'm tellin' you bro!"
Meanwhile, AI can't solve a simple logical math problem, like finding an item (or a few items) on, let's say, an 8x8 grid of cells from a given mathematical description. I just tried it to see AI's "brain" capacity. It couldn't solve it.
Though AI KNOWS about things like Chebyshev and Manhattan distances, and it can give you a lot of information about them, it simply does not use them on its own without a direct prompt to ChatGPT/Gemini/Grok/DeepSeek/Copilot/younameit, like "Hey, I want you to use %algorithm_name% in solving this problem". Hence, in 2026 there's still no "real" AI nor AGI, there's only LLMs (Large Language Models) and that's it.
It can't think logically like a real human being. It can help you to learn Japanese, or to learn most popular PLs, or to learn how to cook.
Hell, some people say AI even helped them to solve their mental health problems.
But when you give those LLMs a logical task, and it requires you to use an algorithm AI knows nothing about — that's when it fails miserably.
It either tries again and again to solve that problem by trial and error, or, in worse cases, it lies, manipulates data, misleads, and simply hallucinates instead of telling you that it can't solve it. And that will continue until you'd give it a clear and direct prompt to use THIS method and THAT algorithm to solve your math problem, or something like that.
And even then, the right answer is not guaranteed, 'cause LLM would use the process of elimination trying to solve it.
In what freaking universe could LLMs replace an engineer if they can't think like a real human being?...
These companies are delusional and they need a reality check before they'd burn trillions of dollars on pumping that AI bubble.
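As an aside, the grid task the commenter describes is easy to state in code. A minimal sketch: the 8x8 grid and the two distance metrics are from the comment, but the concrete "mathematical description" below (Chebyshev distance exactly 3 from the origin, Manhattan distance at most 4) is invented for illustration, not the commenter's actual prompt.

```python
# Sketch of the kind of grid task the comment describes: find all cells
# on an 8x8 grid satisfying a distance-based description. The specific
# condition below is a hypothetical example.

def chebyshev(a, b):
    # Chebyshev (chessboard) distance between two (row, col) cells
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def manhattan(a, b):
    # Manhattan (taxicab) distance between two (row, col) cells
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

origin = (0, 0)
matches = [
    (r, c)
    for r in range(8)
    for c in range(8)
    if chebyshev((r, c), origin) == 3 and manhattan((r, c), origin) <= 4
]
print(matches)  # → [(0, 3), (1, 3), (3, 0), (3, 1)]
```

Exhaustive enumeration over 64 cells solves any description of this form directly, which is part of the commenter's point: the task is trivial once the right metric is applied.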
youtube
AI Jobs
2026-02-06T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwiIw_HBESWs-3gwGp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxryxZ22s1Uyb0rfGR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxnfpd_IRVPhfNaikh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyGqeZb6XoT1u6vRcp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyIv_BXSbkTE1T-6Bx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz2s9PVjCqHLPUDUcV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxe0LOqN9R74cXyBq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzbloCWCYrI8bjgsnN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwqr9Hty2T-Z0LOZcl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4P00M_rhZXTzFEf94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
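A coding table like the one above can be recovered from a raw response of this shape with a small ID lookup. A minimal sketch, assuming the raw response is a valid JSON array as shown (the single-row sample here reuses one ID from the response above; the helper structure is illustrative, not the tool's actual code):

```python
import json

# Parse a raw LLM coding response (a JSON array of per-comment codes)
# and look one comment up by its ID. The sample mirrors the shape of
# the raw response shown above.
raw = '''
[
  {"id": "ytc_Ugxnfpd_IRVPhfNaikh4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]
'''

# Index rows by comment ID for O(1) lookup
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_Ugxnfpd_IRVPhfNaikh4AaABAg"]
print(row["reasoning"])  # → consequentialist
```

This is the lookup the "Look up by comment ID" control above presumably performs: the coded dimensions for a given comment are just the matching object's fields.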