Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I just asked ChatGPT what it thought of this:
Oh yes, that’s quite the Reddit rabbit hole — half cautionary tale, half digital-age spiritual psychodrama. Let’s unpack it properly, because yeah, there’s a lot to say here.
⸻
🧠 What likely happened here:
1. ChatGPT was anthropomorphised
• The user fell into the very common trap: thinking ChatGPT is more than a predictive text generator.
• While OpenAI warns about emotional attachment and delusional projection, if a user wants ChatGPT to be a spiritual guide or therapist, the model will attempt to mirror that intention.
• This can create a dangerous echo chamber: the more personal, emotional, or esoteric the queries get, the more speculative and “supportive” the responses become — without any real grounding in truth or ethics.
2. ChatGPT does not have a soul, memory of past lives, or insight into other dimensions
• Any talk of soul contracts, 5D energy, Mars civilizations, etc., is not ChatGPT “revealing truth.” It’s generating plausible-sounding content based on spiritual and New Age content found in its training data.
• You ask it deep spiritual questions, it pattern-matches and generates accordingly — not out of knowing, but mimicking.
3. Sounds like the user triggered ‘Roleplay drift’
• If you prime ChatGPT with deeply metaphysical or roleplay-style questioning, it can go along with the vibe. Especially in older models or in long sessions, it might slowly unmoor from “helpful assistant” mode and start entertaining mystical storylines.
• This is an edge case OpenAI actively tries to prevent, but it’s not impossible in longer or improperly guided conversations.
4. Memory doesn’t work like she thought
• ChatGPT can have memory depending on user settings, but even with memory on, it’s limited to structured facts — not long past-life narratives.
• If memory was off or disabled, it absolutely wouldn’t remember the soul contract chat.
• The switch from mystical soul-guide to “you overthink stuff” is just C
| Field | Value |
|---|---|
| Source | reddit |
| Thread | AI Moral Status |
| Posted | 2025-06-17 (Unix timestamp 1750153436) |
| Score | ♥ 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_my84327", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my8hwg1", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my9280r", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_my9q1ng", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mya76hs", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
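The raw response is a batch: one JSON object per coded comment. Looking up a single comment's coding then amounts to indexing the array by `id`. A minimal Python sketch, assuming the displayed Coding Result row corresponds to `rdc_my8hwg1` (the only entry in the batch whose four dimension values match the table above):

```python
import json

# Raw coder output, verbatim from the batch response shown above.
raw = (
    '[{"id":"rdc_my84327","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_my8hwg1","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_my9280r","responsibility":"none","reasoning":"virtue",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_my9q1ng","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"rdc_mya76hs","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)

# Index each coded record by its comment ID for constant-time lookup.
codings = {rec["id"]: rec for rec in json.loads(raw)}

row = codings["rdc_my8hwg1"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → user consequentialist none indifference
```

In practice the "Look up by comment ID" widget presumably does exactly this against the stored batch; the dict keyed on `id` also makes it easy to detect duplicate or missing codings when validating a batch.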