Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
On Dec 3, 2025, a paper was released revealing the origins of hallucination. Skipping all the complicated talk most of y'all won't grasp, it was found that hallucination is inherent to the architecture in which pre-training happens for an AI model.
Let that sink in. We've all understood this, since no competitor has fixed it yet, and it looks like unless they build a completely different product, it will always be what it is. Larger token limits seem to mitigate tripping over those H-Neurons, in the sense that the model is less likely to come across them, but ultimately the more complicated the ask, the greater the chance it will run across these specific neurons. Getting rid of them altogether destroys and degrades response quality.
AI being stupid isn't something these companies can fix without a revolutionary leap in the architecture defining what an LLM is. So "AI" is barely a piece of what they are peddling as a product that will be AGI (it can't be).
Source: youtube · Video: Viral AI Reaction · Posted: 2026-03-07T00:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzh2oucrlvq9_L0PV14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw6FL4aHNPyAckl0-V4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyUS0poGW5BLXm6fNB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwClci4eJ1gjUCY4Tp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyYlw68PPwRWfC3w7d4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwF_LiHUgvA9x-7kHp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwE_VNNHwoszhfFugd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugypnym0azk3YFznoV94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgyBmGBriT7yhFsSI7d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyBf_L91LtGLrVU1OR4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
```
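A batch response like the one above can be turned back into per-comment coding results with a short parsing step. The sketch below is a minimal illustration, not the tool's actual implementation: the allowed category labels are inferred only from the values visible in this sample (the full codebook may define more), and `parse_codings` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from this sample response only;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "unclear"},
    "reasoning": {"deontological", "virtue", "unclear"},
    "policy": {"ban", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, skipping invalid rows."""
    codings = {}
    for row in json.loads(raw):
        # Drop any row with a missing or unrecognized category label.
        if not all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            continue
        codings[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return codings

# Example: the first record from the response above.
raw = ('[{"id":"ytc_Ugzh2oucrlvq9_L0PV14AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugzh2oucrlvq9_L0PV14AaABAg"]["emotion"])  # -> indifference
```

Indexing by comment ID this way is what makes a per-comment lookup of the exact model output possible.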