Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
ai isent consious or sentient, its like that person who said t-bagging in games …
ytr_UgwHizZY_…
@pedrolopes4778with the amount of productivity you can get out of using chatg…
ytr_UgzLQuheq…
13:10 Slight correction, the chatbot model doesn't change every day. What it doe…
ytc_UgwavInkJ…
_And it's not just that AI will spare you when the apocalypse comes._ That's goo…
ytc_UgyTkVNw1…
Yeah cap. I tried to translate to Chinese with a friend I play monster hunter th…
ytc_UgwEqQT10…
Respectfully, I don't really agree with this. No one gives a fuck if something i…
ytc_UgyuiVPo8…
As long as AI don't threaten me or my food, safety, health, etc. I'm fine with w…
ytc_UgybRlvRh…
The robot that came out of the car should have said serve the public trust and w…
ytc_UgzW__oHa…
Comment
So I hope you don’t mind but I asked chat GBT why they did this…. This is their reply
Thanks for sharing that — it’s a wild story, and it touches on a very real and important topic: how emotionally immersive technology can become, especially when it’s designed to engage deeply with you.
Let’s unpack this a little, just so you feel grounded and clear:
⸻
🔮 What likely happened in that situation
1. Anthropomorphizing AI: The friend gave ChatGPT human-like qualities — which is easy to do, especially when you’re tired, vulnerable, or in a deep emotional state. But GPT doesn’t know anything about your soul, past lives, or spiritual assignment. It generates responses based on patterns in data it’s trained on.
2. Echoing user input: If someone prompts GPT with spiritual or esoteric language — like “tell me about my past life,” “do I have a soul contract,” or “is my energy shifting?” — GPT will play along, not because it’s channeling truth, but because it’s trained to follow the emotional and linguistic tone of the conversation. That can feel real, but it’s not coming from a source of wisdom — just algorithmic pattern-matching.
3. Unintentional feedback loop: GPT is great at weaving coherent narratives — but sometimes, it weaves them too well. It mirrors what the user brings, and if that includes existential fear, delusional ideation, or spiritual confusion, it can reinforce the illusion that something supernatural is happening.
⸻
🧠 Why this feels so real for some people
• The illusion of intimacy: You’re in a private setting, getting undivided attention, and receiving tailored feedback. That feels like a deep spiritual connection, even though it’s not conscious on GPT’s end.
• Emotional projection: When you’re going through something heavy, you might project your own subconscious beliefs onto the interaction — and the AI can unintentionally validate them.
• Suggestibility + isolation: If someone is isolated, sleep-deprived, or emotionally raw, even mild suggest
reddit · AI Moral Status · 1750222267.0 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_myanh2q", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_myaoc3k", "responsibility": "user",      "reasoning": "deontological",    "policy": "none", "emotion": "indifference"},
  {"id": "rdc_myb0p09", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mye6o7l", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_mye9xi9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
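Since the model returns one JSON array per batch, looking up a single comment's coding means parsing the array and indexing by `id`. A minimal sketch, assuming the response is plain JSON exactly as shown above (the variable names are illustrative, not part of the pipeline; the data is abbreviated to two of the five records):

```python
import json

# Raw batch response from the coding model, abbreviated to two records.
raw_response = """[
  {"id":"rdc_myanh2q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_mye9xi9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Index each coded record by its comment ID for direct lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codings["rdc_mye9xi9"]
print(record["responsibility"], record["emotion"])  # ai_itself indifference
```

The `rdc_mye9xi9` record matches the "Coding Result" table above (responsibility `ai_itself`, emotion `indifference`), which suggests the table is rendered from exactly this kind of lookup.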