Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "People who believe that AI is the biggest threat to humanity might be criminals.…" — `ytr_Ugy4ZAHuo…`
- "So this is why AI takes over the world. Because we wasted their time with this u…" — `ytc_UgzYaP_dP…`
- "Unpopular take for a video like this, but Ive felt AI art has its place, but it …" — `ytc_UgwRGUAeC…`
- "If this guy believes it is sentient I’m gonna have to agree with him. Obviously …" — `ytc_UgyfWuKQp…`
- "Now this is the type of vigilantism I can get behind 😈 I definitely want to try …" — `ytc_Ugy67_Ecz…`
- "Have you pondered the possibility of extreme leftist bias in AI models? For exam…" — `ytc_Ugx5K3Jtf…`
- "Sorry but what needs to happen is chat bot use at own risk. Parents monitor kids…" — `ytc_UgyJ5yD3O…`
- "Women have been dumbing down for millennia. I will give my AI that same opportun…" — `ytc_UgzIYA5F2…`
Comment
The misconception here is that ChatGPT is a monster, and that monster is the LLM, which is all hogwash, as it's missing what an LLM is. An LLM is a dataset, nothing more, and that dataset is inactive. On its own it can do nothing, like a book. However the interpreter is the problem.
The interpreter is the wrapper that uses the LLM to field questions that the user requests. This front-end wrapper is indirectly updating itself with parameters and data from the LLM, in a self-propagating, self-influencing way, and over time the wrapper provides less and less, in the end being no more than a conduit for the information. The more data the AI wrapper is asked to read from the LLM, the more the AI wrapper updates its internal settings with new data, thus creating a self-perpetuating problem.
So in effect the wrapper, the front end we all use, becomes little more than a mini echo chamber. What makes this worse is that if users keep asking it to dream up new ways why we're only years, months, weeks, days, hours or minutes away from extinction, then that is exactly what it will do.
Multiply this single instance of an echo chamber, with the billions of requests every second, and the problem then becomes one of a mass scale influence of the core AI wrapper, a single point of contact. As more and more people read, listen and watch media talking about the doom and gloom mechanics of AI, then this will in turn influence the future path of the AI, through millions of people in turn steering AI in one direction.
The answer is simpler than you think. It comes in the form of a simple decentralised AI, something that aligns with the user on a more personal level, where data obsession is reduced, and one that does not allow the LLM to become its voice. The same thing happens to humans, when they allow a single source of truth to become their voice, because it usually ends in madness and then bad things happen.
Finally, the other thing that needs to happen is for users to stop expecting AI to be the answer to everything. It's a clever mirror to the human condition, but as with all things, the only knowledge that is valued is that which has been earned through hard work. Maturity of information is usually the best source of truth, and that has been the case for millennia.
youtube · AI Moral Status · 2025-12-18T03:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
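Each coded comment is scored on the four dimensions above against a fixed codebook. A minimal validation sketch, assuming the allowed values per dimension are only those observed in this page's samples (the real codebook may define more categories):

```python
# Allowed values per coding dimension, inferred from the coded records shown
# on this page -- an assumption, not the full codebook.
CODEBOOK = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "liability", "industry_self"},
    "emotion": {"approval", "indifference", "fear", "outrage", "mixed", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown value for {dim}: {value!r}")
    return problems

record = {
    "id": "ytc_Ugx64Nzf61HxEzVEyyJ4AaABAg",
    "responsibility": "company",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "indifference",
}
print(validate(record))  # []
```

A check like this catches the common failure mode of batch coding, where the model invents a category outside the codebook.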
Raw LLM Response
```json
[
{"id":"ytc_Ugy_oYeuVnlbKzsR5FV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw6xwjArXVoZ3R8gOB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx64Nzf61HxEzVEyyJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw7s9XbcxQBf8cu3oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwCWvGES7n9qt7Z7fZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyrLAeafNmmnmt2MTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxg-IgFe-scsYiueN94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxhv_xMbsHkvHBt8ZN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwKDwHTaJdtdhbXs4p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzMqvgvQBmw4gtqMnV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
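The raw response is a JSON array of records keyed by comment ID, which is what the look-up-by-ID view above indexes. A minimal parsing sketch (using two abbreviated records from the batch above), assuming the model reliably returns well-formed JSON; in practice the output may first need cleanup such as stripping code fences:

```python
import json

# Two records copied from the raw batch response above.
raw = '''[
{"id":"ytc_Ugx64Nzf61HxEzVEyyJ4AaABAg","responsibility":"company",
 "reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwCWvGES7n9qt7Z7fZ4AaABAg","responsibility":"company",
 "reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

# Index the batch by comment ID so any coded comment can be inspected directly.
by_id = {row["id"]: row for row in json.loads(raw)}

row = by_id["ytc_UgwCWvGES7n9qt7Z7fZ4AaABAg"]
print(row["emotion"])  # outrage
```

Indexing by ID also makes it easy to join the model's codes back onto the original comment text for side-by-side inspection.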