Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The short memory or window size you mention is not the only cause of hallucinations; hallucination is intrinsic to how LLMs work. Here is more about it:

• The Probabilistic Guessing Game 🎲: Like the video mentioned earlier, LLMs are fundamentally predicting the most mathematically likely next word. Sometimes a string of words sounds incredibly plausible and statistically likely but is completely disconnected from factual reality.
• Training Data Flaws 🗑️: If the data a model was trained on contains biases, contradictions, outdated information, or outright fiction, the AI will confidently repeat those errors. It's the classic "garbage in, garbage out" scenario.
• Lack of True Comprehension 🧠: AI doesn't understand the world the way humans do. It maps linguistic patterns, not actual concepts. When asked about a niche topic it hasn't seen much data on, it will seamlessly stitch together related-sounding terms into a convincing, yet entirely fabricated, answer.
• Flawed Retrieval (Tool Errors) 🔍: When the AI uses tools to search a database or the internet and the search returns poorly matched or incorrect documents, the AI will base its answer on that bad information.

So, while agentic frameworks and careful context management help keep an AI focused on long tasks, they definitely don't solve the core hallucination problem! Good luck trying to run complex production workloads knowing they will inevitably hallucinate.
youtube AI Jobs 2026-02-26T04:0…
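
The comment's first point describes next-token sampling. As a minimal illustration (not this page's methodology), the Python sketch below samples a next word from a softmax over invented toy scores; it shows why a fluent but false continuation can still be picked, since the sampling step weighs only likelihood, never factuality.

import math
import random

# Invented toy scores for three candidate next words after, say,
# "The capital of France is" -- a real model derives these from its
# weights; these values are assumptions for illustration only.
logits = {
    "Paris": 4.2,    # plausible and true
    "Lyon": 2.1,     # plausible but false
    "banana": -3.0,  # implausible
}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)
# Sampling picks words in proportion to probability, so the fluent but
# false option still wins some of the time: nothing here checks facts.
print(probs)
print("sampled:", random.choices(list(probs), weights=list(probs.values()), k=1)[0])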
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwGW59DevAiA5FdqK94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxhYLq_-WaikbyUo-Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz5FVa8saOozGF0XIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwtnmMR3IebZ2jsAgB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwUCb-dkizB6hXmwsx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyenmsuFICQzXNzCWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwtwcbttFZ0FF7u8fJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy1AycOmFPmRg9SPJh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxIJPDbCxtvGxnIIfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyOvc_ttRXBNZoe8pl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]