Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I keep saying it and no one is fucking listening, illegalize all AI and automati…
ytc_UgzN421lQ…
I don’t know humanoid robot is the useful primary application in robotics…it’s a…
ytc_UgxBUKWyu…
I think that if it acts conscious enough to make you unsure if its conscious or …
ytc_Ugz1ylcRi…
There are people that want a ai robot future, with autopilot cars and planes, en…
ytc_Ugx_6dSfQ…
Imagine if you were in the back of one of these waymo cars and the cops turn on …
ytc_UgzNNNxQZ…
Over AI because I knew what the Altman's would do. Tax the wealthy quit using th…
ytc_UgwvDCzuJ…
What will happen to non ai will it be looked upon as slavery from ai or will it …
ytc_Ugx_4cWwN…
ChatGPT can definitely speed up your learning/figuring something out experience …
ytc_UgzfHXU5D…
Comment
this video is already a little outdated, Hallucinations are being solved very quickly and have already been drastically reduced. Problem is, there is a large time lag between new model releases, and when businesses move to these new models, and businesses probably opting for less expensive models vs the SOTA. So I would imagine in many of these cases, we could be talking about the likes of GPT4/4o, Gemini 1.5/2.0, maybe even mini/flash modesl etc. GPT-5, Gemini 2.5 Pro, Sonnet/Opus 4 are much more reliable and although not perfect, i expect the problem of hallucinations will be completely gone by mid 2026.
youtube
AI Responsibility
2025-09-30T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwZiGVuJPbyQpOxQTx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmpP6bGUkVjZyhJY54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxIKZM-dXhEQ44CJc94AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzq7OCR3kjcwUtgogl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwI1WAfOzmUccOz7mh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzE3nM6k9hXDtehsjp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz982_ZEILBaDCuybB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzRZlcOai1sjCM2wQh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKW9-2ArrHUnUst3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx0mQRWVqwF2uttnYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
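The lookup-by-comment-ID behavior described above can be sketched as follows. This is a minimal illustration, assuming the raw LLM response parses as a JSON array of coded records like the one shown; the variable names are hypothetical, and the sample is truncated to two records from the response above.

```python
import json

# Truncated sample of the raw LLM response shown above (two records).
raw_response = """
[
  {"id": "ytc_UgwZiGVuJPbyQpOxQTx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzE3nM6k9hXDtehsjp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

# Index the coded records by comment ID for constant-time lookup.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up one comment's coding result by its ID.
record = codes_by_id["ytc_UgzE3nM6k9hXDtehsjp4AaABAg"]
print(record["responsibility"], record["emotion"])  # company fear
```

Indexing by ID once up front is what makes the "inspect the exact model output for any coded comment" view cheap: each click is a dictionary lookup rather than a scan over the full response.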