Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
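As a minimal sketch of what a lookup by comment ID could do, assuming the raw batch responses are stored as JSON files on disk (the directory layout and function name here are hypothetical, not the tool's actual implementation):

```python
import json
from pathlib import Path

def find_coded_comment(comment_id: str, responses_dir: str = "raw_responses"):
    """Return the coded entry for `comment_id` from stored batch responses.

    Assumes each *.json file holds one raw LLM response: a JSON array of
    objects with an "id" field, as in the example at the bottom of this
    page. The directory name is an assumption.
    """
    for path in sorted(Path(responses_dir).glob("*.json")):
        for entry in json.loads(path.read_text()):
            if entry.get("id") == comment_id:
                return path.name, entry
    return None

# Usage: find_coded_comment("ytc_UgwFJbtsw0d_mVbzX5x4AaABAg")
```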
Random samples (click any to inspect):

- "Someone used AI to create flappy bird and tetris and they made BOTH games in les…" (ytr_UgwkGgVF-…)
- "amaxiing ep. by the way, I don't expect GPT to beat Steven in this generation ha…" (ytc_Ugx6sYxf1…)
- "You don't have to worry about chatgpt. The government is spying on you already 😂…" (ytc_UgwyF1JOb…)
- "No, I disagree. I’ve been making art for almost 40 years, professionally for ove…" (ytc_Ugz2MsX6C…)
- "The masses are not prepared for AI and automation. Just take them out and shoot …" (ytc_Ugzhv7dHJ…)
- "I'm in translation and this shit happens here too... They translate with AI and …" (ytc_Ugx7rJclX…)
- "These are the people from terminator. He’s knew it was wrong but did it anyway .…" (ytc_Ugw-Pj3m2…)
- "Lost respect for artists? God damn, AI uses artists creations to amalgam its own…" (ytc_Ugw9m-Ggy…)
Comment (youtube · AI Responsibility · 2025-12-15T16:3…)

> The hallucination issue (circa 17%) will never really be solved at scale.
> In order to refine a RAG model to an acceptable low hallucination rate to need to narrow its scope, making it less general by definition, making it less usable for questions not related to its training data.
> Statistically impossible for a large scale LLM to remove hallucinations because it is incapable of determining what is true, so effectively you'd need constant human in the loop hygiene of all 3 trillion data points.
> LLM's are at a dead end in terms of significant improvement from here.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
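For orientation, a sketch of the coding schema behind this table. The value sets below include only the codes that appear in the sample batch shown on this page; the actual codebook may define more categories:

```python
from dataclasses import dataclass

# Value sets observed in the sample batch below; the real codebook
# may be larger. These are assumptions, not the canonical schema.
RESPONSIBILITY = {"none", "ai_itself", "developer", "company"}
REASONING = {"unclear", "consequentialist"}
POLICY = {"unclear", "none", "liability", "regulate"}
EMOTION = {"indifference", "mixed", "outrage", "frustration", "resignation"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check that every dimension uses a known code."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```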
Raw LLM Response
[{"id":"ytc_UgzAyB4UkzQDPD1dT5Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyJdQc7VpWLV7Obpax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgyWyASXvzut5TnPQrp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugy0gvvEv-uNvolHgHZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugwd7PHqixPGiU76k9t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"frustration"},{"id":"ytc_UgyA-KhnhLpiJ1ZyC1h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugz_D2Nnqh5G0siPE5B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwFJbtsw0d_mVbzX5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgwGvMj1O2A0X7sefx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},{"id":"ytc_UgxJo3hRkgUEbNPDovl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]