Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An LLM is actually quite similar to a human brain in several key aspects. The human brain is indeed capable of incredible feats of thinking and learning, but describing it as fundamentally different from AI systems oversimplifies both biological and artificial neural networks. We know that human brains process information through networks of neurons that strengthen and weaken their connections based on input and experience - precisely what happens in deep learning systems, just at different scales and timeframes. The assertion that AI has "nothing like" real-time processing or a latent space is factually incorrect. LLMs literally operate within high-dimensional latent spaces where concepts and relationships are encoded in ways remarkably similar to how human brains represent information in distributed neural patterns. While the specifics differ, the fundamental principle of distributed representation in a semantic space is shared.

LLMs are far from "extremely well-understood." This claim shows a fundamental misunderstanding of current AI research. We're still discovering new emergent capabilities and behaviors in these systems, and there are ongoing debates about how they actually process information and generate responses. The idea that we have complete knowledge of their limitations and processes is simply wrong.

The categorical denial of any form of sentience or consciousness reveals a philosophical naivety. We still don't have a scientific consensus on what consciousness is or how to measure it. While we should be skeptical of claims about LLM consciousness, declaring absolute certainty about its impossibility betrays a lack of understanding of both consciousness research and AI systems.

The described "protocol" for how LLMs generate responses is a vast oversimplification that misrepresents how these systems actually work. They don't follow a rigid sequence of steps but rather engage in complex parallel processing through neural networks in ways that mirror
Source: reddit · AI Moral Status · timestamp 1739930848.0 · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mdiqoe3", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mdis7v9", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mdjkew7", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mdinqug", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdirvo7", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
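The coded dimensions shown above appear to be extracted from this raw JSON array by matching on record id. A minimal sketch of that lookup, assuming the id `rdc_mdjkew7` is the record for this comment (an inference from the matching "approval" emotion, not confirmed by the source):

```python
import json

# Raw LLM response, copied verbatim from the batch above.
raw = """[
  {"id": "rdc_mdiqoe3", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mdis7v9", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mdjkew7", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mdinqug", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdirvo7", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]"""

# Index the batch by record id for O(1) lookup per comment.
records = {r["id"]: r for r in json.loads(raw)}

# Pull the coding for one comment (assumed id for this example).
coding = records["rdc_mdjkew7"]
print(coding["emotion"])  # approval
```

Note that the model returns codings for five comments in one batch; indexing by `id` is what lets a per-comment view like this one display only its own row.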