Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by its comment ID or by browsing the random samples below.
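For scripted access outside the UI, a lookup could look like the following sketch. The file name `coded_comments.jsonl`, its one-record-per-line layout, and the `lookup_comment` helper are illustrative assumptions; only the field names come from the raw response shown at the end of this section.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes one JSON object per line with the same fields as the raw
    LLM response at the end of this section (hypothetical storage layout).
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

print(lookup_comment("rdc_g9t84et"))
```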
Random samples:

- `ytc_Ugw4HbqF_…`: ELON MUSK, please consider this: The complexity of Artificial Intelligence (AI) …
- `ytc_UgyhadK55…`: Ai Gets Crazy. My grandparents don't know anything about what is real and what i…
- `ytc_UgxxTt8Zq…`: Artists, please. They only talk about the damage suffered by artists, so they m…
- `ytc_UgxWcxvol…`: All the people who are saying that AI is better than paying artists, again, wher…
- `ytc_UgyDJ62V6…`: Should be regulated. If it’s AI should have a note at bottom of shot that says …
- `ytc_UgwFIiLo1…`: using prompt injection with a language other than english is a massive open back…
- `rdc_hm8p84z`: Tomorrow's headlines: "Covid has just merged with Meta. It will make you very s…
- `rdc_g9t84et`: Also, we don't have free floating antibodies for that long do we? It's memory ce…
Comment
> We do form associations our entire lives but this is an oversimplification.
> Humans are capable of more granular associations, create narratives around those and use those narratives as heuristics in future experiences, and reevaluate the narrative if we find out we are wrong.
> Agents are kinda closer to this but still nowhere close.
> When we say "understand" as humans, what we mean is that we have engaged with a topic enough to have narratives about the topic that accurately map to real life, which we can then take and apply to not only that situation, but use as guides in other similar situations, or even sometimes vastly different ones.
> This is not how AI works at all. And thus, it does not fit what we would call "understanding". It can be useful and can often give accurate information, and you can use agents in order to double check that information somewhat. But it is still fundamentally a different process.
> We don't work in weights we work in stories. Muscle memory is what we call working in weights. And AI does a really good job of emulating that or even occasionally surpassing us due to ability to iterate quickly. But there's something missing, and that something is what we humans call "understanding". We humans do not understand what understanding is but we understand LLMs enough to know they don't do it, at least not yet.
Source: reddit · Topic: AI Moral Status · Posted: 1750970052 (Unix time; 2025-06-26 UTC) · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
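The table above is one row of a four-dimension coding scheme. Below is a minimal sketch of that scheme, assuming only the labels that actually appear in this section; the real codebook may define more values per dimension.

```python
from dataclasses import dataclass

# Label sets observed in this section only; the full codebook may define more.
RESPONSIBILITY = {"none", "company"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "regulate"}
EMOTION = {"approval", "outrage", "indifference", "fear"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension carries a label outside the observed sets."""
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")
```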
Raw LLM Response
[{"id":"rdc_mzxydd2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_mzze4ci","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_mzy57g2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_mzy49yr","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_mzys3m7","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]