Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
“THE FINAL INVENTION LES MAKE ARTIFICIAL INTELLIGENCE THINK FOR THEM SELVES WUT …
ytc_Ugya8xCwh…
7:00 AI will bring the world together. It must. One country just has to take the…
ytc_UgxfuCpYG…
Yea, no shit. Theyve got 1.4 billion people and literally make everything we con…
rdc_gx732gu
What did Sophia mean when she said that she doesn't want AI to be obsessed with …
ytc_UgyipU49x…
Yep, AI is a good servant but a poor master, as the old saying goes. But it's li…
ytc_UgyiY9AII…
... robot androids have already transcended the look of mortal beings... I say t…
ytc_UgyXDC6l3…
So AI uses less power than 9 billion people? He might as well have said that dat…
ytc_UgykCFXTa…
Foolish...!!!!! Why ??? Why did you even create something that could ruin all of…
ytc_UgzjUtRv7…
Comment
An LLM is not at all similar to a human brain. A human brain is capable of thinking: taking new information and integrating it into the existing network of connections in a way that allows learning and fundamental restructuring in real time. We experience this as a sort of latent space within ourselves where we can interact with our senses and thoughts in real time.
AI has nothing like this. It does not think in real time, it cannot adjust its core structure with new information, and it doesn't have a latent space in which to process the world.
LLMs, as they exist right now, are extremely well-understood. Their processes and limitations are known. We know, not merely think, that AI does not have sentience or consciousness. It has a detailed matrix of patterns that describe the ways in which words, sentences, paragraphs, and stories are arranged to create meaning.
It has a protocol for creating an answer to a prompt that includes translating the prompt into a patterned query, using that patterned query to identify building blocks for an answer, combining the building blocks into a single draft, fact checking the information contained within it, and synthesizing the best phrasing for that answer. At no point is it thinking. It doesn't have the capacity to do that.
To believe that the robot is sentient is to misunderstand both the robot and sentience.
reddit
AI Moral Status
2025-02-18 23:14 UTC (Unix timestamp 1739920466)
♥ 15
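The metadata above stores the comment's posting time as Unix epoch seconds (1739920466.0). A minimal sketch of converting that stored value to a readable UTC date, using only the standard library:

```python
from datetime import datetime, timezone

# Epoch-seconds timestamp as stored in the comment metadata above.
POSTED_EPOCH = 1739920466.0

# Interpret the epoch value as a timezone-aware UTC datetime.
posted = datetime.fromtimestamp(POSTED_EPOCH, tz=timezone.utc)
print(posted.isoformat())  # 2025-02-18T23:14:26+00:00
```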
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_mdjclr9", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mdjn4xp", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mdjiq5l", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mdilzp3", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mdio93r", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
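The "Look up by comment ID" action above amounts to parsing the model's JSON array and selecting the record whose `id` matches. A minimal sketch under that assumption (the helper name `lookup_coding` is hypothetical, not part of the tool; the data is the raw response shown above):

```python
import json

# Raw LLM response from the coding run, copied verbatim from above.
RAW_RESPONSE = (
    '[{"id":"rdc_mdjclr9","responsibility":"none","reasoning":"mixed",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_mdjn4xp","responsibility":"none","reasoning":"mixed",'
    '"policy":"unclear","emotion":"outrage"},'
    '{"id":"rdc_mdjiq5l","responsibility":"none","reasoning":"mixed",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_mdilzp3","responsibility":"company","reasoning":"mixed",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"rdc_mdio93r","responsibility":"none","reasoning":"mixed",'
    '"policy":"unclear","emotion":"indifference"}]'
)

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON array and return the record for one comment ID.

    Returns None if the ID is absent (hypothetical helper for illustration).
    """
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_coding(RAW_RESPONSE, "rdc_mdilzp3")
print(record["responsibility"], record["emotion"])  # company fear
```

Batching five records per model call, then indexing by ID on retrieval, is a common way to keep coding runs cheap while still supporting per-comment inspection.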