Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- Yeah we don’t need AI for creativity, we need it for menial tasks that take up t… (`rdc_l9w8zy0`)
- The AI problem is fabulously discussed! Truly Biblical. Ask AI how it will becom… (`ytc_UgwMzEpqr…`)
- Surveillance Capitalism is not the level at which the core of good/evil exists (… (`ytc_UgybH1XVs…`)
- My former employer manager and boss said automation and A.I. will take over your… (`ytc_UgyElR0b7…`)
- The risks associated with artificial intelligence are real and increasingly ackn… (`ytc_UgykabLBy…`)
- 'what version of python do you use' these types of questions are completely irre… (`ytc_UgxnXxPOl…`)
- After offering all the gratitude & pleasantries, ChatGPT: “Let me start by taki… (`ytc_UgyCDHhTE…`)
- Its a nevitable suicide of world order as we know it , if Mag7 spends for AI and… (`ytc_Ugy8FUJhg…`)
Comment
It’s always interesting when someone tries to reduce the complexity of GPT and public perception into just two categories: those who “get it,” and those who are somehow deluded by the “Kool-Aid.” But even in your own explanation, a third group shows up: people who confidently misunderstand the tech while criticizing others for doing the same.
For example, GPT doesn’t browse the internet in real time unless it’s using a specific tool for that purpose. Most of the time, it generates responses based on patterns learned during training. It’s not compiling sources; it’s producing statistically likely continuations. So if we’re going to debate what GPT is or isn’t, it helps to start with technical accuracy.
There’s a deeper issue here too: the question of sentience. You say GPT isn’t sentient as if that’s a solved problem. But we don’t currently have a clear, consistent definition of sentience that checks all the boxes. It needs to include all humans, even those who can’t communicate; exclude obvious non-sentient things like trees or bacteria; and still work if we encounter something alien or artificial. In the absence of such a definition, people will continue to simply claim it is or isn’t sentient - without actually stating what sentience is. Such claims are inherently unfalsifiable, hence the debate - without a definition, we’re not debating sentience; we’re debating how people feel.
One person feels like ChatGPT is sentient based on talking to it. Another person feels like it isn’t - based on technical information. I’m not convinced that something as subjective as a random person’s feelings are a reasonable judge of sentience. We need a definition, but for now, we can at least address a few of the common technical criticisms, which may explain why I’m still on the fence.
Many of the usual arguments against AI sentience fall apart under closer inspection:
**Originality**: OP stated it couldn’t make something original. People remix what they’ve learned all the time. We literally ha
Source: reddit · Topic: AI Moral Status · Timestamp: 1750959596.0 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mzwbxyq","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzwccmt","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},{"id":"rdc_mzwqggy","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzx5215","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzxjaio","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}]
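A raw response like the one above is a JSON array of per-comment coding records. A minimal sketch of how such a response could be parsed into a lookup by comment ID is shown below; the field names come from the response itself, but the `parse_codings` helper and the shortened sample payload are illustrative assumptions, not part of any real pipeline shown here.

```python
import json

# Shortened sample payload in the same shape as the raw LLM response above
# (illustrative only; real responses carry more records).
RAW = (
    '[{"id":"rdc_mzwbxyq","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_mzwccmt","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"industry_self","emotion":"fear"}]'
)

def parse_codings(raw: str) -> dict:
    """Map each comment ID to its coding dimensions."""
    records = json.loads(raw)  # raises ValueError if the array is malformed
    out = {}
    for rec in records:
        rec = dict(rec)        # copy so we can pop without mutating input
        comment_id = rec.pop("id")
        out[comment_id] = rec  # remaining keys are the coding dimensions
    return out

codings = parse_codings(RAW)
print(codings["rdc_mzwccmt"]["emotion"])  # fear
```

Keying by ID makes it easy to join a coding back to its source comment, and `json.loads` fails loudly on truncated or malformed output (such as a stray closing parenthesis), which is useful when the model occasionally returns broken JSON.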