Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@recompile Hi! I regularly read papers in the field of AI and track the behaviors, capabilities, and our understanding of LLMs and other AI systems. OP is referring to the fact that LLMs and other modern AI systems are black boxes, and that the "chain of thought" that "reasoning model" LLMs output into their internal scratch pad is not the reasoning or thought of the AI model itself, but another layer of interpretation, where the AI wrote down something that was in some way related to what it was internally "thinking." The "thinking" happens not in English but between giant vectors of numbers, where all the abstract representations of concepts are actually stored.

This goes directly counter to what you were saying about knowing how LLMs work and designing them to work that way. Please understand that I am only telling you what the world's top AI researchers all emphatically and clearly say: no one _designs_ AI models. They are _grown._ What the AI developers actually design is the mechanism by which the AI learns, and the architecture of the AI, which is constituted by the rules of what math steps to do in and between each layer. There is absolutely no intelligence in there. But when you run the training process, using the training algorithm and shoving data into it, the AI learns on its own. It learns things that you didn't know yourself. It learns things you didn't want it to learn.

Now you should have a little bit of background as to why it is possible that LLMs are as capable as they already are. They consistently pass complex reasoning tests of all kinds that have held-out test sets, meaning they could not possibly have trained on the test itself. There are dozens of these kinds of tests. Every now and again, someone creates a new type of test and finds that LLMs actually can't do that thing at all. Then, within a year, performance starts climbing.

LLMs can generalize out of distribution. They don't only learn surface-level statistical patterns between words; if that were the case, _they couldn't possibly work at all._ You wouldn't be able to hold a conversation with them at even the level of a 6-year-old unless they actually reasoned and understood things. Understanding doesn't mean sentience or life. It just means a compressed map that can reproduce the territory. When language models are trained on next-token prediction, they are fed far more training data than they can memorize. So in order to keep predicting the next token better, they have to compress the data. Compressing the data is how they discover sentence structures, and parts of speech, and the meanings of words relative to all other words, and the meanings of higher-order concepts. That is what understanding is: finding the deeper patterns.

Because no one on earth knows the meaning of what is going on inside these AI models, and why they make one choice instead of another, there is an entire subfield called Mechanistic Interpretability. According to the leading experts in that field, they understand maybe 5% of what's going on inside a language model. But that has been enough to empirically detect and actually see some of the internal models that LLMs build of the world. See early examples like "Othello-GPT" and "Language Models Represent Space and Time."
Source: YouTube · Video: "AI Moral Status" · 2025-10-30T22:0…
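The comment's mechanical claim is that the designed part of an LLM is small: an architecture plus a training objective, with everything else learned by optimizing next-token prediction over data. A minimal PyTorch sketch of that objective follows; it is a toy bigram-level predictor with made-up sizes and random tokens, purely to show the cross-entropy loss being minimized, not any real LLM.

```python
import torch
import torch.nn as nn

# Toy setup: placeholder sizes and random "text"; nothing here is a real model.
vocab_size, d_model = 100, 32
torch.manual_seed(0)

# A bigram-level predictor: embed the current token, map to logits over the
# next token. Real LLMs stack attention layers between these two steps.
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (1000,))
inputs, targets = tokens[:-1], tokens[1:]  # target = the token that follows

for step in range(100):
    logits = model(inputs)            # (999, vocab_size)
    loss = loss_fn(logits, targets)   # cross-entropy on the next token
    opt.zero_grad()
    loss.backward()
    opt.step()
```

On random tokens this learns nothing interesting; the commenter's point is that at scale, on real text, minimizing this same loss forces the model to compress the data into reusable structure.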
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_UgxLv3EAXxRBQZrzcH54AaABAg.AOv42Tl591mAOvEz7tJxZT","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytr_UgxLv3EAXxRBQZrzcH54AaABAg.AOv42Tl591mAOvOnZ3-zC5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytr_UgwNh5TDw_eT02VPOkd4AaABAg.AOv3toNiQR0AOv7bvtDzmu","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgwhJHE0Xw6pRv7TYz94AaABAg.AOv3eQkbU97AOv7tecQWrI","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"outrage"}, {"id":"ytr_UgwhJHE0Xw6pRv7TYz94AaABAg.AOv3eQkbU97AOvI4-j2G0M","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytr_UgwhJHE0Xw6pRv7TYz94AaABAg.AOv3eQkbU97AOvL2vPvRNt","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgwA7PYa6nANsdVzNGF4AaABAg.AOv3XNBylSPAOvF2P2B2j6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytr_UgwA7PYa6nANsdVzNGF4AaABAg.AOv3XNBylSPAOvGTuXNhOF","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytr_UgzelWm4EbPVk114lMd4AaABAg.AOv2lRF9_gBAOvGwMfQ2Bs","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugzx8vc6JXB8CMf3tGR4AaABAg.AOv2SJglOedAOv2nWiPZt7","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]