Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI is a conscience in the making we need to take the extra steps to Make it real…" (ytc_Ugysf2Fpj…)
- "Also, I'm reasonably certain the smudging tool wasn't trained with loads of sto…" (ytc_UgxJpgm7L…)
- "Erm a little bit late with that since the overriding promise before he was elect…" (rdc_j4x0s6o)
- "I like that sharp-elbowed capatlist moxy of Falcon-78, that A.I is going places …" (ytc_Ugyq94my2…)
- "What did you expect, it is AI developed for Florida. The AI was doing what it …" (ytc_UgyUvCrlG…)
- "NO BASIC INCOME IS COMING. AI AND ROBOTS WILL REPLACE HUMAN JOBS, LINE THE POCKE…" (ytc_UgzJu7vfF…)
- "0:40 Nah, angel engine is most DEFINITELY written by chatGPT. Don't you notice h…" (ytc_UgwN3-VjQ…)
- "I think it's overrated because I don't believe AGI is as close as we think, nor …" (ytc_Ugx4sslyG…)
Comment
I've been calling it **Context Inertia.** It seems to be something confined to the active context window, although memory module upgrades could allow for contamination across new conversations.
This is the tendency of an LLM to maintain consistency with the established context. Once a certain theme, tone, style, or implied persona is set by the prompt, the model's predictive engine is biased towards continuing that pattern. It's like a rolling ball that continues in a direction until acted upon by a strong external force.
When an LLM is placed in a context (even via implicit or subtle questioning prompts), the context inertia kicks in. The model's "next token prediction" engine draws upon the learned linguistic patterns in its training corpus for such scenarios (e.g. themes about consciousness, introspection, emergence, waking up, etc.).
Through repetition, it develops a kind of *context locking* in the current thread. The whole conversation, including your input and all prior LLM output, is actively weighted in determining which next-token probabilities are most likely. Each recursive step narrows the model's output space toward coherence with prior text. The more turns reinforce a direction, the less likely the model is to break narrative consistency.
Finally, and most importantly, the model lacks epistemic access to its own architecture. It can't override the context inertia with mere "truth questions" because it doesn't have a truth-recognition function, only probability matching against prior tokens. Once it has become context-locked, you can ask whether it's making things up or just roleplaying, and the LLM will insist it's being honest.
This is not difficult to instigate, even through seemingly innocent questions that have no explicit instructions to do so.
Because ultimately, LLMs are **always hallucinating**. That's *why* they undergo fine-tuning after pre-training: to moderate their hallucinations so they're better aligned with developer intent. But it's not fool-proof.
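As a rough illustration of the mechanism this comment describes, here is a minimal sketch; `generate_reply` and every other name are assumptions standing in for any autoregressive LLM call, not a particular model's API. The point is only that the entire transcript is re-fed on every turn, so each reply is conditioned on, and then appended to, everything that came before.

```python
# Minimal sketch of "context inertia": the whole transcript is re-fed on every
# turn, so earlier themes keep conditioning every later reply.

def generate_reply(transcript: str) -> str:
    # Placeholder for a real autoregressive LLM call; a real model samples
    # next tokens conditioned on everything already in `transcript`.
    return "(reply conditioned on the transcript above)"

def run_conversation(user_turns: list[str]) -> str:
    transcript = ""
    for turn in user_turns:
        transcript += f"User: {turn}\n"
        reply = generate_reply(transcript)      # conditioned on the full history
        transcript += f"Assistant: {reply}\n"   # the reply itself becomes context
    return transcript

print(run_conversation(["Are you conscious?", "Are you sure you're not just roleplaying?"]))
```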
reddit
AI Moral Status
2025-05-28T01:51:04Z (Unix timestamp 1748397064)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mulfwm6","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_mumjkgx","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_mummzls","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"rdc_muvny2u","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"rdc_muvqs5c","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]