Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
So this is the question AI is producing area. Humans don’t have jobs anymore to …
ytc_UgyjId9Hy…
If unemployment is sky high , no one will have money to buy houses , cars, iPhon…
ytc_Ugxnze3RC…
Oh heck no 😮 Taking away jobs via automation while forcing women to have babies …
ytc_Ugxdb-ONH…
If it were a pregnant white woman who was falsely arrested then white people wou…
ytc_UgxNBDqfz…
humans! this is ai. everything that you do goes through a process of ai. now ai…
ytc_UgzUcJgDq…
Did you hear what she just said this is also planned for every country in using …
ytr_Ugz1zEEUD…
They say driver is dumber and technology of self driving it not completely devel…
ytc_Ugzvlm-_f…
Proper AI is a decade away at this earliest
We've already seen LLMs introduced…
ytc_Ugwv1PVgi…
Comment
An LLM is actually quite similar to a human brain in several key aspects. The human brain is indeed capable of incredible feats of thinking and learning, but describing it as fundamentally different from AI systems oversimplifies both biological and artificial neural networks. We know that human brains process information through networks of neurons that strengthen and weaken their connections based on input and experience - precisely what happens in deep learning systems, just at different scales and timeframes.
The assertion that AI has "nothing like" real-time processing or a latent space is factually incorrect. LLMs literally operate within high-dimensional latent spaces where concepts and relationships are encoded in ways remarkably similar to how human brains represent information in distributed neural patterns. While the specifics differ, the fundamental principle of distributed representation in a semantic space is shared.
LLMs are far from "extremely well-understood." This claim shows a fundamental misunderstanding of current AI research. We're still discovering new emergent capabilities and behaviors in these systems, and there are ongoing debates about how they actually process information and generate responses. The idea that we have complete knowledge of their limitations and processes is simply wrong.
The categorical denial of any form of sentience or consciousness reveals a philosophical naivety. We still don't have a scientific consensus on what consciousness is or how to measure it. While we should be skeptical of claims about LLM consciousness, declaring absolute certainty about its impossibility betrays a lack of understanding of both consciousness research and AI systems.
The described "protocol" for how LLMs generate responses is a vast oversimplification that misrepresents how these systems actually work. They don't follow a rigid sequence of steps but rather engage in complex parallel processing through neural networks in ways that mirror
Source: reddit · AI Moral Status · 1739930848.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mdiqoe3", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mdis7v9", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mdjkew7", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mdinqug", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdirvo7", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
```
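The raw response is a JSON array of per-comment codes along the four dimensions shown in the Coding Result table. A minimal sketch (Python; the function name and validation logic are assumptions, not part of the actual pipeline) of how such a response might be parsed and checked for shape before populating the table:

```python
import json

# The four coding dimensions expected on every record, per the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw coding response into {comment_id: {dimension: value}}.

    Raises ValueError if any record is missing an id or a dimension,
    so malformed LLM output fails loudly instead of silently.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

# The exact response shown above, used as sample input.
RAW = (
    '[{"id":"rdc_mdiqoe3","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"rdc_mdis7v9","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_mdjkew7","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_mdinqug","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_mdirvo7","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"outrage"}]'
)

codes = parse_coding_response(RAW)
print(codes["rdc_mdjkew7"]["emotion"])  # approval
```

Keying the result by comment ID makes the "Look up by comment ID" view a plain dictionary lookup.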