Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I used some AI generators recently and I think it's more promising as an art ref…" (ytc_UgzMsulBm…)
- "The robot looks so familiar!!! Look like a person!! Oh i remembered it's the whi…" (ytc_UgwmvHua4…)
- "Ive been saying AI is talking to the ones in the Bible who said they would retur…" (ytc_UgxRg5d6P…)
- "Well, I'd say it has always been obvious that, when AI would've been invented, s…" (ytc_Ugxpow7uG…)
- "AI is dangerous! Imagen in a few years they can use your face for anything...f…" (ytc_UgzVxQDcC…)
- "God creates minds, arrogant men don't. We're just easily deceived by these stati…" (ytc_UgzHNo2WA…)
- "LLM’s and large servers are training models that learn from the inside out; maki…" (ytc_UgzykRQUZ…)
- "“criticism over AI is racist” is a wild take for many reasons, but I would make …" (ytc_UgwU_XqLU…)
Comment
The short memory or window size you mention is not the only cause of hallucinations; it is intrinsic to the nature of LLMs. Here is more about it:
• The Probabilistic Guessing Game 🎲: Like the video mentioned earlier, LLMs are fundamentally predicting the most mathematically likely next word. Sometimes, a string of words sounds incredibly plausible and statistically likely, but is completely disconnected from factual reality.
• Training Data Flaws 🗑️: If the data a model was trained on contains biases, contradictions, outdated information, or outright fiction, the AI will confidently repeat those errors. It’s the classic "garbage in, garbage out" scenario.
• Lack of True Comprehension 🧠: AI doesn't understand the world the way humans do. It maps linguistic patterns, not actual concepts. When asked about a niche topic it hasn't seen much data on, it will seamlessly stitch together related-sounding terms into a convincing, yet entirely fabricated, answer.
• Flawed Retrieval (Tool Errors) 🔍: When AI uses tools to search a database or the internet, if the search returns poorly matched or incorrect documents, the AI will base its answer on that bad information.
So, while agentic frameworks and careful context management help keep an AI focused on long tasks, they definitely don't solve the core hallucination problem! Good luck trying to run complex production workloads knowing that it inevitably is going to hallucinate.
youtube · AI Jobs · 2026-02-26T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwGW59DevAiA5FdqK94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhYLq_-WaikbyUo-Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5FVa8saOozGF0XIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtnmMR3IebZ2jsAgB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwUCb-dkizB6hXmwsx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyenmsuFICQzXNzCWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtwcbttFZ0FF7u8fJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1AycOmFPmRg9SPJh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxIJPDbCxtvGxnIIfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOvc_ttRXBNZoe8pl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
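A response like the one above can be parsed and sanity-checked before the records are written to the coding table. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this sample (the real codebook may define more categories), and the function and variable names are illustrative, not part of the actual pipeline.

```python
import json

# Label sets inferred from the sample response above; the real codebook
# for this project may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def parse_coded_comments(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    all fall inside the inferred codebook."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must carry a value from ALLOWED for every dimension.
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(len(parse_coded_comments(raw)))  # 1
```

Dropping (rather than repairing) out-of-codebook records keeps the check simple; a production pipeline might instead log and re-prompt for the offending IDs.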