Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
**You’re not wrong that LLMs are trained to predict the next word, and don’t have consciousness like a human - but it’s an oversimplification to say “that’s all it is.”** LLMs have *emergent properties* that weren’t directly programmed in: they can reason, abstract, reflect, and even teach new concepts. The “mirror” effect isn’t just parroting - it’s building on a massive, distributed knowledge graph learned from language itself. Modern AI systems layer in memory, tools, and even ongoing context, so “no memory” isn’t a hard limit, it’s just the basic model. Dismissing LLMs as “just clever code” ignores how *all* intelligence, even in humans, emerges from complex patterns and statistical processes - brains included. The output isn’t magic, but it’s more than simple mimicry. If you want to see what an LLM can do, don’t treat it like a calculator - treat it like a mind-in-progress. If all you see is the shallow end of the pool, it's because that's all you offered to work with. TL;DR: Yes, it’s code. So is your brain. The difference is in degree, not kind.
Source: reddit · Topic: AI Moral Status · Posted: 1749868805.0 (Unix timestamp) · ♥ 3
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mxoadof", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mxoe1av", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mxqfdon", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_myc01re", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_myo4gd2", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
```
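The coding result appears to be aggregated from the five raw LLM codings, most plausibly by a per-dimension modal vote (the tool's actual aggregation rule is not shown here). A minimal sketch of such a vote in Python, assuming simple plurality per dimension:

```python
import json
from collections import Counter

# The raw LLM response JSON shown above, embedded verbatim.
RAW = """[
  {"id": "rdc_mxoadof", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mxoe1av", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mxqfdon", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_myc01re", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_myo4gd2", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]"""

records = json.loads(RAW)

def aggregate(codings, dimension):
    """Return the most frequent value coded for one dimension."""
    counts = Counter(c[dimension] for c in codings)
    return counts.most_common(1)[0][0]

for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, "->", aggregate(records, dim))
```

Note that plurality alone reproduces `responsibility` (none), `reasoning` (mixed), and `policy` (none), but `emotion` is a 2–2 tie between "mixed" and "approval"; the displayed "approval" implies some additional tie-breaking logic not visible in this record.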