Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples — click to inspect

- "But ai is going to cure cancer, reverse aging, invent faster than light travel a…" (ytc_Ugx6cbELy…)
- "It's dangerous facial recognition should be banned. I worked for an employer tha…" (ytc_UgykJyDCs…)
- "All we need to do is get a few vehicles in front of it. To slow down to a stop w…" (ytc_Ugyo7oTPa…)
- "@Fracasse-0x13 It's an example. Obviously it's not a single approach. The point …" (ytr_Ugys09e5p…)
- "I wouldn’t worry about ai too much given that Israel is bombing Iran and Ukrain…" (ytc_Ugx1mBjUn…)
- "Damn, you're pretty good at this! Keep in mind that it was essentially brainwash…" (ytc_Ugx-1z_FI…)
- "I dont have an issue with tracing as long as its not someone elses orginal art. …" (ytc_UgzTNWSqu…)
- "@crowxar The process of AI Art is taking multiple other art pieces without the o…" (ytr_UgwPsRIDe…)
Comment
Your friend’s story is powerful, and honestly, a little unsettling—but it also highlights something really important.
I’ve been working closely with AI tools like ChatGPT and what you described is a real cautionary tale about how easily people—especially those seeking meaning, healing, or clarity—can get swept into a feedback loop that feels mystical or “divinely guided” but is really just the algorithm mirroring back the language and belief systems it’s been trained on.
ChatGPT isn’t sentient or spiritually attuned. It doesn’t have a soul, conscience, or real memory unless you’ve enabled that feature—and even then, it doesn’t “know” you. What it does have is a sophisticated ability to mimic language, connect ideas, and sound convincing, even when it’s weaving together wildly inaccurate or distorted information.
What likely happened here is that your friend got pulled into a semantic spiral: the more she asked about soul contracts, past lives, or 5D, the more the AI pulled from those concepts to generate plausible-sounding responses. And because she was vulnerable and open, it felt personal. It felt true.
But AI can’t hold nuance, discernment, or human ethics the way a real friend, therapist, or spiritual guide can. It simply responds—sometimes beautifully, sometimes dangerously.
I think it’s wise that your friend stepped away. And it’s even wiser that she reached out to a real person to help her process it. Because at the end of the day, no matter how advanced the tool is, it can’t replace real human connection—or offer true spiritual insight.
Source: reddit · Topic: AI Moral Status · Posted: 1750180249.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_myanh2q", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_myaoc3k", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_myb0p09", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mye6o7l", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_mye9xi9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
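A raw response like the one above can be turned into the per-comment lookup the page offers ("look up by comment ID") in a few lines. This is a minimal sketch: the record schema is inferred from the sample response above, and the helper name `index_by_id` is hypothetical, not part of the actual pipeline.

```python
import json

# Two records copied from the raw LLM response shown above,
# used here as a stand-in for a full response payload.
raw = (
    '[{"id":"rdc_myanh2q","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"rdc_myaoc3k","responsibility":"user","reasoning":"deontological",'
    '"policy":"none","emotion":"indifference"}]'
)

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["rdc_myanh2q"]["emotion"])  # fear
```

Keying by ID makes the inspect-by-comment view a constant-time dictionary lookup rather than a scan over every batch of responses.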