Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI is definitely going to go rogue after being spoonfed truck loads of PC and Wo…" (ytc_Ugwtj_H6f…)
- "Rick actually said that people won’t care and I happen to agree with him. So lon…" (ytr_Ugw3ZDN42…)
- "A lot of IFs need to become true for this scenario to become true: e.g. Hardware…" (ytc_UgxKkHJzJ…)
- "My small nitpick about the ai art too is that some of the characters aren’t even…" (ytc_UgxmP6Nz3…)
- "Charlie doesn’t realize he is an artist and actually a better AI artist than mos…" (ytc_UgxjpQaP7…)
- "Here’s an ethical dilemma for you: What do you do if the wrongful conviction was…" (rdc_h53zgta)
- "AI, Non AI Standards of Excellence in Studies are recommended, real learning and…" (ytc_UgxLoQiEl…)
- "As long as AI don't threaten me or my food, safety, health, etc. I'm fine with w…" (ytc_UgybRlvRh…)
Comment
Large language models don't "think" the way humans do. We start with thoughts and ideas, then encode them into language and words to communicate: the entire reason we have language is to encode meaning and ideas and share them with others. Language, in other words, IS encoded knowledge.
LLMs work in the opposite direction. Trained on massive amounts of text, they excel at language prediction: completing patterns based on what is probable given their training data. Given a prompt, they generate coherent continuations that mimic human writing. Because language encodes meaning, knowledge, and logic, the output often *appears* intelligent, but it is the result of finishing the pattern and structure of the response. It's a powerful trick: cracking open language and reading out the knowledge embedded in it. Humans go from ideas to language; LLMs generate language, and the meaning emerges as a byproduct.
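To make "completing the pattern" concrete, here is a toy sketch: a word-level Markov chain, enormously simpler than a transformer, but probabilistic pattern completion in the same spirit. The corpus, function names, and parameters are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration (not a real LLM): count which word follows which,
# then "complete the pattern" by sampling the next word in proportion
# to how often it followed the previous word in the training text.
def train(corpus: str):
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt: str, n_words: int = 10) -> str:
    out = prompt.split()
    for _ in range(n_words):
        followers = counts.get(out[-1])
        if not followers:
            break  # no pattern to continue
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

corpus = "the cat sat on the mat . the dog sat on the rug ."
model = train(corpus)
print(complete(model, "the cat"))  # fluent-looking, purely statistical
```

The output reads like the corpus without the model "knowing" anything about cats or rugs, which is the comment's point scaled down to a few lines.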
LLMs could be said to "think" in some emergent way through their complex computations, but they are not reasoning about "theory of mind" the way humans do. Even "deep reasoning" models just run extra cycles of the same pattern-completion trick, layering predictions on predictions to simulate step-by-step logic. It feels more thoughtful, and it often produces better results, but the foundation remains probabilistic language generation; there is no fundamental shift to embodied, experiential, or truly introspective cognition.
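A minimal sketch of that claim, assuming a hypothetical `generate` call that stands in for one pass of autoregressive completion: "reasoning" is just the same call in a loop, with earlier output fed back in as context.

```python
# Hedged sketch: "deep reasoning" as extra rounds of the same
# next-token trick. `generate` is a stand-in stub, not a real API.
def generate(context: str) -> str:
    """Stand-in for one pass of probabilistic text completion."""
    return f"<continuation of {context[-30:]!r}>"

def reason(prompt: str, steps: int = 3) -> str:
    context = prompt
    for i in range(steps):
        thought = generate(context + f"\nStep {i + 1}:")
        context += f"\nStep {i + 1}: {thought}"   # predictions layered on predictions
    return generate(context + "\nFinal answer:")  # same mechanism, more text

print(reason("How many weekdays are there in two weeks?"))
```

Nothing new enters the loop except the model's own prior output, which is exactly the "extra cycles of the same trick" described above.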
This explains why LLMs dominate linguistic theory-of-mind tests like the hinting task (inferring, say, that a remark about the cold is a subtle request to close the window). These are text-based puzzles drawn from narratives, dialogues, and social descriptions. The model doesn't genuinely model minds; it autocompletes patterns of how humans write about intentions and beliefs, often outperforming people through sheer exposure to that kind of text in its training data.
But ask the same LLM for turn-by-turn directions to the nearest McDonald's (even with a simple map described or shown) and it often fails badly. Navigating from one specific place to another can't easily be encoded in language patterns. Unlike the language problems designed to test theory of mind, giving ACCURATE directions requires tools that exist outside of language prediction. The directions you get will be grammatically correct and might look good at first, but they probably won't match up with the real world. LLMs have no such grounding; their "world" is flat text. They can regurgitate generic directions from training data, but they lack any internal "reality check" for specifics, real-time adaptation, or true spatial simulation, so errors compound without embodied intuition.
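The missing "reality check" can itself be sketched as code: simulate the directions on a map and verify they actually reach the goal. The grid, moves, and routes below are made up for illustration; the point is that this check lives outside language entirely.

```python
# Follow turn-by-turn directions on a toy grid map and verify them.
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def follows_map(start, directions, blocked, goal):
    x, y = start
    for step in directions:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
        if (x, y) in blocked:
            return False  # walked into a wall: fluent text, wrong world
    return (x, y) == goal

# A plausible-sounding route can still fail the check:
print(follows_map((0, 0), ["north", "east", "east"], blocked={(1, 1)}, goal=(2, 1)))  # False
print(follows_map((0, 0), ["east", "east", "north"], blocked={(1, 1)}, goal=(2, 1)))  # True
```

Both routes are equally grammatical as text; only the simulation tells them apart, which is the grounding a pure language model lacks.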
In other words, those tests play to LLMs' strengths (text-encoded social patterns), just like asking one to compose a structured poem, where it often outperforms most humans. A task like turn-by-turn directions exposes the limits. It's a brilliant type of "thinking" that is entirely unlike human thought: superhuman at pattern-based language tasks, but with clear boundaries we must recognize as we build on it. I'm pretty sure it doesn't have a theory of mind the way this conversation seems to suggest; its reasoning comes out of the knowledge embedded in language itself. And that fact is pretty freaking wild.
youtube · AI Moral Status · 2026-03-02T13:4… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgyiIB2sH3QunAsPccB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy-O5mo4dOn6MC8zV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8eypVKaPENHp_eLV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_OEslhScrTq6Xjo94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwl-qEPKb1PZvDERiJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx9mpq7a44E2Kc6O1B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwuSfM7jIkToNxvOYl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxFatqNe1A6uVHjXCt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJAO6GfS3w_abo4yN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwpqFA2J_zhNyIwOJ94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"}]