Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI replacing jobs is scary, but tools like ShortlistIQ show how AI can actually …" (ytc_UgyoxPsWU…)
- "Yeah once chatgpt got memory it really helped. But it since wants to he called s…" (ytc_Ugz-ltEH3…)
- "My question to AI. Can you help humanity remove out of existence sociophatic nar…" (ytc_Ugz-ENy2I…)
- "Christ this thing is so good. And it can remember you for 2 weeks. Its with thi…" (rdc_mfgv20v)
- "ACLU configures facial recognition software for 80% confidence threshold then wa…" (rdc_ewsmh6i)
- "The fact that wrong answers are called "hallucinations" says it all. Hallucinati…" (ytc_Ugx0bVczn…)
- "The AI bubble just popped, its all over the news. Useless topic to make a video …" (ytc_Ugz_BaFM7…)
- "The clip from Hotel Transylvania is a lot more dynamic. You can tell the one on …" (ytc_UgwOZZkcz…)
Comment
Hallucinations in LLMs are going to keep decreasing slowly, resulting in this kind of AI being able to do well enough what it can now barely do.
A couple of years later, some other architecture (likely not genAI, or not just genAI) comes around and delivers a large part of what was promised about LLMs.
AGI remains, like fusion energy, about two decades away for a while still.
The bigger question for me is: does popping the LLM/GPU bubble bring down the whole economy and cause a significant recession, or does it just kill the likes of OAI or Anthropic and hobble Nvidia or Meta?
youtube
AI Responsibility
2025-10-02T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgysIT2spg7TZSSSRjt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwgHWHrfVEigPtgaEt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwU7tJbXuAs94gSFsV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwYBWAXb1zAt5OZKHl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwA6FmI8hwTh-wJQZp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgytAQGU8gISiFDwdcR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw4sA7HiMJ0QZaH7bl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwgg8KvEN5yUMki9_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQ1zsgjnlPXfOA_Q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQ7-rB3rWfsWZEEWt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]