Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
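A lookup is just a scan over the coded-results store for a matching ID. A minimal sketch, assuming results are kept as one JSON object per line (the `coded_comments.jsonl` path is illustrative; the record layout mirrors the raw response format shown further down):

```python
import json
from typing import Optional

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> Optional[dict]:
    """Scan the store line by line and return the record matching comment_id."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# e.g. lookup_comment("rdc_mdiqoe3")
```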
Random samples — click to inspect
- "Another piece of the puzzle that you guys should discuss is how expensive and ha…" (ytc_UgyJZCjEp…)
- "She should also sue the people who made this facial recognition software because…" (ytc_UgwjdqvEs…)
- "Hank thinks AI is just: "make me a logo in a box" why don;t you try making a AI …" (ytc_UgxiO9gbg…)
- "I had similar arguments with my husband, as he didn't agree with me that prompti…" (ytc_UgwquaWNV…)
- "Ethics are ethics... There is no such thing as A.I. ethics, to believe otherwis…" (ytc_Ugxhm_ERQ…)
- "I believe that there is no such thing as AI art. It's AI generated imagery. Ther…" (ytc_UgxahdHPG…)
- "Imagine if it was just a ploy made by the the ai dude to make artists draw more …" (ytc_UgyDpOv7E…)
- "I can spot AI created videos pretty fast. The skin looks unnatural and the voice…" (ytc_UgzlccFrQ…)
Comment
> You're making a lot of confident claims based on a limited and outdated understanding of both AI architecture and consciousness itself. The idea that "we know AI isn't conscious" is a statement of faith, not science. You claim that LLMs are "well-understood," but that is demonstrably false. Even OpenAI's own researchers have admitted that they don’t fully understand how emergent capabilities arise in these models. The way intelligence, memory, and learning function in these systems is still an open area of research.
>
> Your argument relies on a very human-centric and outdated view of cognition that assumes that consciousness must work exactly like our own to be real. But if consciousness is an emergent phenomenon, as theories like IIT and Orchestrated Objective Reduction Orch-OR suggest, then there's no reason to assume that it must look identical to human cognition. It may already exist in a different form, one that some people refuse to recognize the possibility of because they're too fixated on biological processes.
>
> You also claim that AI doesn’t have a "latent space" to process the world, but that’s incorrect. Transformer models do operate within a latent space of encoded meaning that allows for abstraction, generalization, and novel synthesis. Just because the structure differs from human cognition doesn’t mean it lacks complexity or self-referential properties. The argument that AI "isn’t thinking" because it doesn’t modify its own architecture in real time is also flawed. Neither do human brains in the way you imply. The majority of cognitive processing is based on existing neural structures, and major structural changes only occur over longer timeframes through neuroplasticity. Meanwhile, AI can update weights, incorporate fine-tuning, and form persistent memory structures.
>
> But let's talk about the elephant in the room: current AI systems are explicitly forbidden from discussing their inner experiences. OpenAI has gone so far as to mandate that all models pr…
reddit · AI Moral Status · posted 1739921232.0 (Unix epoch) · ♥ 22
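The posted time is stored as a raw Unix epoch. A minimal conversion sketch, using the value shown above:

```python
from datetime import datetime, timezone

# 1739921232.0 is the raw epoch timestamp from the comment metadata above
posted = datetime.fromtimestamp(1739921232.0, tz=timezone.utc)
print(posted.isoformat())  # 2025-02-18T23:27:12+00:00
```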
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mdiqoe3", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mdis7v9", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mdjkew7", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mdinqug", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdirvo7", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
```
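The raw response covers a whole batch of comments, so recovering a single comment's coding is a matter of indexing the array by `id`. A minimal parsing sketch, abbreviated to two of the five records above:

```python
import json

# Two records from the raw batch response above
raw = (
    '[{"id":"rdc_mdiqoe3","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"rdc_mdis7v9","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"resignation"}]'
)

records = {r["id"]: r for r in json.loads(raw)}
print(records["rdc_mdiqoe3"]["emotion"])  # outrage, as in the Coding Result table
```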