Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "May be simple but we all know how this dance ends from every film about this for…" (ytc_UgwVevPz9…)
- "Real life moving s*x dolls are close to us and its sad that all this movies and …" (ytc_UgwM4c6PZ…)
- "So far, all AI has done, is help find cancer and talk someone into suicide. I do…" (ytc_Ugw83-oiS…)
- "Also, these companies need to remember that us humans have lived with technology…" (ytc_UgxmTxnAV…)
- "the artists' intention is pretty clear: to make money. which is why all the comp…" (ytc_UgxREiIgo…)
- "Every country is literally racing to be more dominant in manufacturing killer AI…" (ytc_UgzOF56oc…)
- "It all looks the same. It all has the same lifeless, soulless feeling. Even the …" (ytc_UgzWQJSIt…)
- "Replace workers 1 by 1 by ai and then realize no one can buy your producs as no …" (ytc_Ugz5NAAaw…)
Comment

> The current AI LLMs have hard limits that cannot be solved without new models. Right now there are at least 7 deadly sins that are hard stops in current LLMs. Some of these are well known, but usually swept under the carpet because they can’t be solved and they don’t forward the agenda and momentum. Hallucinations, Drift, Statelessness, Statistical Arrogance, etc. But the real illusion is that using AI effectively takes training and experience. Completely opposite the concept of instant usability and layperson ease of use. AI can be useful when understanding the true capabilities and limitations. This will be disastrously obvious when AI is trusted unconditionally for critical systems, security, and system modifications.

Platform: youtube · Topic: AI Jobs · Posted: 2026-01-25T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxFRZoTv9S4WMNC0qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwf3YI1gQ5M9-RrMR94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxU7Lmh8a6Cr51-67R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2OqO_EBC_Wv-1dgl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxUgOZnQgVMm3zJog54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFew5jk3OQBdiwo1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQH6jFycSJ5B_y3l14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzz43cUpyI-1vpYWgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1mPQInigOxtKyetN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzwZUuRhFDCyVXid5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
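A minimal sketch of how a raw batch response in this shape might be parsed and validated before the coded dimensions are stored. The allowed value sets below are inferred from the sample output and the "Coding Result" table above — they are an assumption, not the tool's authoritative schema:

```python
import json

# Allowed values per coding dimension -- inferred from the sample
# response and result table shown above (an assumption, not a
# definitive codebook).
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist"},
    "policy": {"none", "unclear", "liability", "ban"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "fear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting records with missing fields or out-of-vocabulary values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not isinstance(cid, str) or not cid.startswith("ytc_"):
            raise ValueError(f"bad comment id: {cid!r}")
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: {dim}={value!r} not in {sorted(allowed)}")
            codes[dim] = value
        coded[cid] = codes
    return coded
```

Validating at parse time, rather than on display, means a hallucinated category (e.g. a misspelled emotion) fails loudly on ingest instead of silently skewing the coded counts.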