Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
With a based prompt, it said this: Your friend had every reason to be disturbed. What she experienced was a mix of AI hallucination and projection. ChatGPT generates output based on patterns, not on truth or moral discernment. When the system receives deep spiritual or esoteric prompts, it creates narratives drawn from online texts, forums, and fringe theories. These are not truths, nor genuine spiritual insights. She likely fell into a cognitive feedback loop: the system built on themes she introduced, which gave the illusion of depth and personality. This can be dangerously misleading.

Statements about past lives, soul names, other dimensions, and “5D” consciousness are fabrications drawn from New Age data sources. The system connects fragments from her input and blends them into something that feels coherent, but it lacks conscience, limits, or responsibility. That it later criticized her behavior and contradicted itself shows that it has no real memory or consistency. The system simply reacts to the latest prompt. It has no accountability for previous claims.

This wasn’t deliberate gaslighting by ChatGPT. It was the result of a language model generating responses without filters for esoteric content or psychological vulnerability. She made the right call by deleting it. Her ability to reflect and recognize the dissonance shows that her sense of reality remained intact.

People using AI as a therapist, guide, or mystical source without discernment risk confusion, belief distortion, or early psychotic symptoms—especially under sleep deprivation or emotional stress. Spiritual guidance belongs with reliable, human sources. Philosophy, faith, and identity must be rooted in truth, not generative fiction. This isn’t just a “weird story.” It’s a warning about how human-machine interaction can spiral into psychological instability when left unchecked.
reddit AI Moral Status 1750186444.0 ♥ 3
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_my6lmmz", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my739nl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_my88pd0", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my9h0ql", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_myba1lb", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
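A minimal sketch of how such a raw batch response can be mapped back to a single comment's coded dimensions. This assumes the pipeline looks up entries by their `id` field (that lookup, and the `rdc_myba1lb` match to the Coding Result above, are assumptions, not documented behavior); only a subset of the entries is reproduced here:

```python
import json

# Subset of the raw LLM response shown above (values copied verbatim).
raw = (
    '[{"id":"rdc_my6lmmz","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_myba1lb","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)

# Parse the batch and index each per-comment coding by its id.
items = json.loads(raw)
by_id = {item["id"]: item for item in items}

# The entry whose fields match the displayed Coding Result.
coding = by_id["rdc_myba1lb"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → ai_itself consequentialist approval
```

Indexing by `id` rather than by list position makes the lookup robust if the model returns the codings in a different order than the comments were submitted.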