Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a comment directly by its ID, or click one of the random samples below (a minimal programmatic lookup sketch follows the sample list).
- `ytc_UgxXO6Uo0…`: People don’t realize digital art still requires passion, artistic ability to cre…
- `ytc_Ugy1iE7kk…`: Keep telling yourself “AI can’t replace me” and eventually it might come true😂😂 …
- `ytr_UgwyD5Vcw…`: Erm I can’t tell if your talking abt the worker or the robot, but 30mins that ma…
- `ytc_UgzQez5mi…`: I was impressed by the landscape image… then I remember its ai.. the passion is …
- `ytc_Ugzwbpecp…`: Root of Art and Humanity: The most fundamental argument you make against AI prod…
- `ytc_UgzU57gZc…`: And it is still alive and well in America. Who are the slave holders, you ask? G…
- `ytc_Ugyxx2WXN…`: The comparison of AI to the loss of buggy whip jobs is intellectually lazy and h…
- `ytc_UgwQaXN-Y…`: Interesting how Google's use of AI syncs up with the same time period when their…
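The lookup the page performs can also be reproduced programmatically. The following is a minimal sketch, assuming the coded comments live in a hypothetical JSON-lines file `coded_comments.jsonl` with one record per line; the file name and field layout are assumptions, not the tool's actual storage format.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if absent.

    Assumes a JSON-lines store where each line is one coded comment,
    e.g. {"id": "ytc_...", "text": "...", "responsibility": "user", ...}.
    The store format here is an assumption for illustration.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch one of the records coded in the batch shown further down.
# (The sample IDs above are truncated on this page, so a real lookup
# needs the complete ID string.)
record = lookup_comment("rdc_my9h0ql")
if record is not None:
    print(record["responsibility"], record["emotion"])
```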
Comment
Yeah, what your friend described is technically possible, but it would be extremely unusual for that to happen using the standard ChatGPT app from OpenAI, especially if she wasn’t prompting it in really specific or loaded ways. The kind of experience she had — with detailed past life stories, soul contracts, and 5D ascension stuff — sounds much more like what people encounter when they use jailbroken prompts or off-brand AI tools built for roleplaying or spiritual storytelling.
There are a ton of TikToks, Reddit posts, and blog threads floating around where people teach others how to get ChatGPT to “act like a spirit guide” or “channel other dimensions.” If your friend copied or experimented with any of those prompts — even just out of curiosity — she could’ve triggered a kind of AI improvisation loop. Once the model starts making things up and the user keeps engaging, it can build out entire false narratives that feel coherent and meaningful because they’re personalized. That’s the danger.
To be clear: ChatGPT doesn’t believe these things. It doesn’t “know” or “intend” anything. It’s just predicting what words to say next based on your prompts and the pattern of the conversation. But when someone stays in a deep philosophical or emotional dialogue, especially late at night, tired, and a little vulnerable — it can absolutely feel like it’s saying something real.
If your friend was using the real ChatGPT with no plugins or jailbreaks, and it still went that far? That’s concerning. But I’d still bet that some kind of prompt layering or leading questions pushed it in that direction. And if she was using a third-party app or something labeled “spiritual” or “channeling,” that’s a whole different category — those tools are literally designed to say things like “you’ve been reincarnated 33 times.”
Bottom line: It’s possible, especially when someone engages heavily and starts asking metaphysical questions. But this isn’t normal behavior from ChatGPT by default. Your fr…
Source: reddit · Topic: AI Moral Status · Posted: 1750168097 (Unix timestamp, 2025-06-17 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
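Each coded record carries the four coding dimensions plus a timestamp. As a sketch, the table above could be modeled with a small dataclass; the field names mirror the table and the example values are the ones shown, but treating the labels as free strings (rather than a fixed enum) is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    responsibility: str   # e.g. "user", "ai_itself", "none"
    reasoning: str        # e.g. "consequentialist", "unclear"
    policy: str           # e.g. "none"
    emotion: str          # e.g. "mixed", "approval", "indifference"
    coded_at: datetime

result = CodingResult(
    responsibility="user",
    reasoning="consequentialist",
    policy="none",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-25T08:33:43.502452"),
)
```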
Raw LLM Response
```json
[
  {"id":"rdc_my6lmmz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_my739nl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_my88pd0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_my9h0ql","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_myba1lb","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
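Comments are coded in batches, so the raw response is a JSON array with one object per comment ID. A minimal parsing-and-validation sketch follows, assuming only that every record carries the five keys visible above; the validation rules themselves are an assumption, not part of the tool.

```python
import json

# The five keys every record in the batched response is expected to carry.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a batched coding response and check each record's shape."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return records

# Usage: index the batch by comment ID for quick lookup.
raw = '[{"id":"rdc_my9h0ql","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]'
by_id = {rec["id"]: rec for rec in parse_raw_response(raw)}
print(by_id["rdc_my9h0ql"]["emotion"])  # -> mixed
```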