Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples

| Comment ID | Preview |
|---|---|
| ytc_Ugzo6SlDp… | Utter rubbish , complex tasks make AI have a brain fart , also Government would … |
| ytr_UgzX5HQMG… | We appreciate your comment! In our upcoming live broadcasts on AITube, you can i… |
| ytc_UgzSZihj1… | Sadly... Rule 34 IS a thing. And AI has made it easier to do. I am staunchly opp… |
| ytc_Ugx1WUvVI… | Everytime you REALLY ride that line, the voice artifacts and, when you cross the… |
| ytc_Ugw_eMXuM… | Who is grandpa going to tell his stories too? HA! think about it, it can see a … |
| ytc_Ugx4jOkhD… | There are a lot of people that have a lot of money in the predictions about AI b… |
| ytc_UgxDIs7_z… | Big tech thinks they can use AI to replace the working class, so they can live t… |
| ytc_UgzpF-Rge… | The only way to stop ai is with a solar storm unless we make data centers in spa… |
Comment
There is a genuine misunderstanding about LLMs: they are trained rather than learning in any real sense. While the industry is attempting to bridge the gap with new Reasoning Models that simulate System 2 thinking through chain of thought processing, they remain fundamentally stuck. These models are essentially complex Python based predictive engines that rely on static training and massive compute rather than actual real time adaptation or logical reasoning. They are simulating a thought process rather than truly thinking, and they remain trapped by the massive energy and compute requirements that prevent them from achieving genuine intelligence.
youtube · AI Moral Status · 2026-03-06T09:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxAG9k2AaGAcHC2uFB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzpHB-ckI1gvH2NZ5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwyFkiCaKjEW0rmkvN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw1wj84rffAEL4Qfux4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxHodCfgC2FkyVdsCp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzBE-6zFVnamFsaKDx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqqEEPl85RC1HgpU54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxS44bOMwnSq8l1Urh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-HMaOpF5ru-rcVSJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw_Q-KTlmIZir2z15J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
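
The raw response above is a plain JSON array, one object per comment, with the four coding dimensions as keys. As a minimal sketch of how a lookup-by-ID over such a response might work (the `parse_raw_response` helper is illustrative, and the `ALLOWED` value sets are inferred only from the codes visible on this page, not from the project's actual coding scheme):

```python
import json

# Allowed values per coding dimension. These sets only mirror the codes that
# appear in the sample response above; the real scheme may include more values.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"indifference", "approval", "outrage", "fear"},
}


def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        codes = {dim: rec.get(dim, "unclear") for dim in ALLOWED}
        # Flag any value outside the expected code set rather than dropping it.
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                print(f"warning: unexpected {dim}={value!r} for {comment_id}")
        coded[comment_id] = codes
    return coded


# Example: look up the coding for a single comment by its ID, using one
# record copied from the raw response shown above.
raw = '[{"id":"ytc_Ugw1wj84rffAEL4Qfux4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
coded = parse_raw_response(raw)
print(coded["ytc_Ugw1wj84rffAEL4Qfux4AaABAg"])
```

Keying the table by comment ID makes the "look up by comment ID" workflow a single dictionary access, and the validation loop surfaces any code the model emits outside the expected set without discarding the record.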