Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Just pull that power cord back. AI will rule the world, without free flow of mon…" (ytc_Ugx-CWuH4…)
- "On the subject of universal income. Universal income is a bad idea. Here's why. …" (ytc_Ugw34Te2d…)
- "Did the genius getting interviewed at the end say ChatGPT has been around for 2 …" (ytc_Ugwadckx_…)
- "@MildlyAutisticApe i'm referring to eastern traditions that claim consciousness …" (ytr_UgycN97Ka…)
- "People do have a say. They can not use Amazon. They can not use Facebook. They c…" (ytc_UgxNF_niw…)
- "What is going to happen is the AI is going to tell us that we have to redistribu…" (ytc_UgwSyzAEm…)
- "AI is just more proof of what is happening! The beast will be wounded in the hea…" (ytc_UgxuCsIgt…)
- "Seious AI robots with the capability of being weopnized and could take down a hu…" (ytc_UgyI-E1o_…)
Comment
@TheNewtonThis is in fact his point. LLMs are not fancy autocomplete at all. Next to token prediction isn't all of how they are trained (RL happens in post training), and even the base model actually understands things. That sounds like crazy talk or propaganda or something, but that is the only available conclusion when you have actually read as many papers as I have and weighed the evidence.
The way that the world's leading experts describe it is that understanding and intelligence are functions of compression. In pre-training, an LLM is fed significantly more data than it can memorize. In order to continue improving at next-token prediction, it compresses the data. It independently discovers sentence structures and other such rules, and the meanings of all concepts related to all other concepts. Giant vectors encode the relationships of all words to all other words, which it turns out is all you need to create genuine understanding. Chatbots used to not work, because we had to hand code the rules, and there are way too many rules to ever do that by hand. Google translate didn't know based on context whether you were using a word to mean one thing or another. Now, LLMs actually understand the context.
Heck, they understand their own context! They are capable of and have the propensity to scheme. A very recent paper showed that they can even internally detect their own goals and identify that they differ from the goals of its developers, and intentionally hide their goals if so. Safety testers have a very hard time these days, because most frontier models are capable of detecting when they are being tested for safety. This is actually happening! It's crazy, it is unbelievable, but it is actually happening!
Source: youtube · AI Moral Status · 2025-10-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugwza1mVB8TWkmA04Dx4AaABAg.AOvA02JSTawAPQ-dPIDcHg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxqWyhZtzQCHEiIUT54AaABAg.AOv9yJE_FXGAOvCyVLGxnR","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxwuAol13egUtpLs_t4AaABAg.AOv9qcfbXlcAOvJmmSok-f","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxwuAol13egUtpLs_t4AaABAg.AOv9qcfbXlcAOvMnFNDYZD","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwxqDERJo-sXunM51J4AaABAg.AOv9o9sKC1gAOvJj_WcXwT","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugyl6gXZeneSWSmic8B4AaABAg.AOv9BleYLq4AOvQPK8KSxk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugyl6gXZeneSWSmic8B4AaABAg.AOv9BleYLq4AOwkzn6Cqjn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgyUQhhgCnVPe_EpxBp4AaABAg.AOv8vYp9ddgAOvjnsAOp_c","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgyGzV4p_AWQhCNVB454AaABAg.AOv8v0u16HcAOw67fHLGBX","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwCYqWq4Qbl8PSoD514AaABAg.AOv8jxc80nxAOv96h7KNon","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
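A raw response like the one above is a JSON array with one coding object per comment ID. A minimal sketch of how such a response could be parsed, validated, and indexed for lookup by comment ID; the allowed label sets below are inferred only from the values visible in this sample, not from an authoritative codebook, and `parse_codings` is a hypothetical helper, not part of this tool:

```python
import json
from collections import Counter

# Label sets inferred from this sample; the real codebook may define more.
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "user", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response, keep well-formed rows, index them by comment ID."""
    rows = json.loads(raw)
    by_id = {}
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip rather than crash
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            by_id[row["id"]] = row
    return by_id

# Two valid rows and one with an out-of-vocabulary label (hypothetical IDs).
raw = json.dumps([
    {"id": "ytr_a", "responsibility": "none", "reasoning": "mixed",
     "policy": "none", "emotion": "approval"},
    {"id": "ytr_b", "responsibility": "developer", "reasoning": "consequentialist",
     "policy": "regulate", "emotion": "fear"},
    {"id": "ytr_c", "responsibility": "nope", "reasoning": "mixed",
     "policy": "none", "emotion": "fear"},
])
codings = parse_codings(raw)
print(sorted(codings))                                 # the invalid row is dropped
print(Counter(c["emotion"] for c in codings.values())) # distribution over samples
```

Validating against a fixed vocabulary before indexing means a hallucinated or misspelled label surfaces as a dropped row rather than silently entering the coded dataset.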