Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI generated code is a TIME BOMB. You should never COMMIT a single line of code …" (ytc_Ugy-3L2b4…)
- "LLM is not AI. Chat GPT is a toy. Know everything and understands nothing. A bio…" (ytc_UgwhGIweV…)
- "I know the real reason why they want to make these female presenting robots more…" (ytc_UgzY3R3yc…)
- "It would be the funniest thing if the AI clone that replaces him fired him as a …" (rdc_oh247hk)
- "I've talked about this before with my friends. Until I see a robot actively pick…" (ytc_UgzeNuu0A…)
- "Bro is too old to understand AI will actually deliver Universal Basic Income to …" (ytc_UgygA-YfT…)
- "Plagiarism in AI art is a feature not a bug. The goal of AI art is to exploit ac…" (ytc_Ugzt4oUeH…)
- "It makes a difference in the right to that land. When the USSR was at its max sp…" (rdc_d2xg9z8)
Comment
Yeah, "remembers too much junk" is exactly it. Decay isn't just context economy — it's character. Humans forget, so the AI should too.
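The "decay as character" idea above can be sketched as exponential down-weighting of memories by age, with low-scoring ones dropped entirely. Everything here is a hypothetical illustration, not the system described in the comment: the function names, the `importance` field, and the half-life constant are all assumptions.

```python
import math

HALF_LIFE_DAYS = 30.0  # hypothetical: a memory loses half its weight each month


def decay_score(importance: float, age_days: float) -> float:
    """Exponentially down-weight a memory by age; importance sets the ceiling."""
    return importance * math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)


def forget(memories: list[dict], threshold: float = 0.1) -> list[dict]:
    """Keep only memories whose decayed score still clears the threshold."""
    return [m for m in memories
            if decay_score(m["importance"], m["age_days"]) >= threshold]


memories = [
    {"text": "first conversation", "importance": 1.0, "age_days": 5},
    {"text": "small talk about weather", "importance": 0.2, "age_days": 90},
]
print([m["text"] for m in forget(memories)])  # ['first conversation']
```

With this shape, forgetting is a pure function of importance and age rather than raw context pressure, which is what makes it feel like character instead of truncation.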
On grounding — there is a memory-grounding instruction in the system prompt that tells the model to check its actual context before agreeing to "remember when X" assertions, and to gently push back if it has no record. Works most of the time. The "sometimes folds" thing in the post is when context blocks are summarized loosely enough that the model can't tell whether something actually happened or just feels plausible — that's where it gets agreeable to be helpful.
A dedicated world-fact layer is exactly the angle I haven't tried. Hard facts are scattered across character rules, scrapbook, place facts, and milestones right now. Pulling them into one queryable layer — true / false / unknown rather than soft summary — would give the model something firmer to push back from. Adding it to the list.
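The dedicated world-fact layer described above could be sketched as a small three-valued store: facts are asserted true, asserted false, or simply unknown, and "unknown" is what gives the model firm ground to push back on "remember when X" instead of folding. This is a minimal sketch of the idea, not an existing implementation; the class and method names are invented for illustration.

```python
from enum import Enum


class Truth(Enum):
    """Three-valued status: the layer never guesses, it answers 'unknown'."""
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "unknown"


class WorldFactLayer:
    """Hypothetical queryable fact store: hard facts pulled out of
    character rules, scrapbook, place facts, and milestones."""

    def __init__(self) -> None:
        self._facts: dict[str, Truth] = {}

    def assert_fact(self, statement: str) -> None:
        self._facts[statement] = Truth.TRUE

    def deny_fact(self, statement: str) -> None:
        self._facts[statement] = Truth.FALSE

    def check(self, statement: str) -> Truth:
        # Anything never recorded is UNKNOWN, not a soft summary the
        # model might agreeably round up to "yes, that happened".
        return self._facts.get(statement, Truth.UNKNOWN)


layer = WorldFactLayer()
layer.assert_fact("we visited the lighthouse in June")
layer.deny_fact("the character owns a car")

print(layer.check("we visited the lighthouse in June").value)  # true
print(layer.check("we adopted a second cat").value)            # unknown
```

The key design choice is the explicit third state: a summarizer can blur "absent" into "plausible", but a store that returns UNKNOWN cannot.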
Source: reddit · Viral AI Reaction · timestamp 1777020805.0 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_ohz8yx1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_oh26w5y","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_oh13lhs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_e13m18o","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_e14oe9g","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```