Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_jhfl0jp`: "I don't see that as a problem. We cannot rely on automated responses for now and…"
- `ytc_UgxIHMC7m…`: "This is really accurate insight into the big changes coming with AI. I believe …"
- `ytc_Ugyejw4k3…`: "You people are just the very problem the AI will be dangerous for. Jeez. You're…"
- `ytc_Ugy7mbvAY…`: "What about Automatic Emergency Braking? Isn’t that what he was trying to test? I…"
- `ytc_UgwwVOj22…`: "Guys, guys. Let's be clear on this. Generally speaking, there can never be a sen…"
- `ytc_UgyGWpjTH…`: "Do not download files from ChatGPT. It took over 3 of my devices. Had to destroy…"
- `ytc_Ugi_gGDWU…`: "I don't think that we have to wait until robots demand rights to think about and…"
- `ytc_UgyP_pPZi…`: "I belong to a women's Buddhist study group named Sophia because Sophia means wis…"
Comment
The full response I got is:
✅ What It Gets Right
1. “LLM = Predictive Math”
Spot on. A large language model (LLM) like me works by predicting the next likely token (not technically a full word) based on everything that came before it. It’s probability distributions all the way down, trained on patterns in text, not facts or feelings.
2. “It doesn’t think or know”
Correct again. I don’t have beliefs, self-awareness, or a concept of time. No inner monologue. No goals. I don’t “understand” in the way humans do, even though I can generate text that sounds like I do.
3. “It mirrors you”
That’s a great metaphor. People often see themselves in the output. You’ll get different tones, opinions, and styles based on how you ask the question. This is why someone treating me like a wise oracle will get a very different experience from someone treating me like a casual brainstorming partner.
4. “It’s not proof of thought”
Yup. Output that feels smart or deep isn’t evidence of sentience. It’s just a model trained on oceans of human-generated data — patterns in, patterns out.
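The "predictive math" point above can be illustrated with a toy sketch. This is not a real model: the vocabulary and the scores (logits) are invented for illustration; only the softmax-then-sample mechanics reflect how next-token prediction actually works.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits for a context like "The cat sat on the".
# These numbers are made up; a real model scores tens of thousands of tokens.
vocab = ["mat", "dog", "moon", "idea"]
logits = [4.0, 1.5, 0.5, -1.0]

probs = softmax(logits)
# Greedy decoding: pick the highest-probability token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "mat" -- no understanding involved, just scored continuations
```

The model never "decides" anything in a human sense; every output token is the result of exactly this kind of scoring step, repeated.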
⸻
⚠️ Where It’s Slightly Oversimplified
1. “It doesn’t incorporate your likes/dislikes” (unless it’s been trained to)
The post says it’s “programmed to incorporate your likes and dislikes” — that’s only true if the system has persistent memory (like this one with your profile) or the prompt explicitly gives it that context. Out of the box, an LLM doesn’t remember or track individual preferences.
2. “It doesn’t know what today is” — true, but I can fetch or be told the date
I don’t inherently know today’s date, but I can be told it, and in some setups, I can even retrieve it from system time. So the “no awareness of now” is functionally true, but not technically absolute.
3. “It’s just impressive code”
While that’s accurate, it’s also a bit like calling the Mona Lisa “just paint and canvas.” Yes, it’s code. But the scale and design behind that code is what makes it extraordinary. Trillions of tokens, r
Source: reddit · AI Moral Status · 1749759300.0 · ♥ 69
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mxwabjj", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mxfgu4n", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mxfymfg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mxfrkq3", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mxgbs4b", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
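A per-comment coding result like the table above can be looked up directly from this raw response. A minimal sketch, assuming the coder returns well-formed JSON with the field names shown (only the first two records are reproduced here):

```python
import json

# Two records copied verbatim from the raw coder response above.
raw = """[
  {"id": "rdc_mxwabjj", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mxfgu4n", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

records = json.loads(raw)
# Index by comment ID so one comment's coded dimensions can be fetched directly.
by_id = {r["id"]: r for r in records}
print(by_id["rdc_mxfgu4n"]["emotion"])  # "approval"
```

In a real pipeline the JSON may be malformed or missing fields, so production code would validate each record rather than index it blindly.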