Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The problem with defining consciousness is that no AI will ever match the defini…
ytc_UgxBs7ixk…
The most annoying host ever, never shuts up and comments every shit, it’s almost…
ytc_Ugx_bp_1_…
Generative ai should ONLY be used to fill in the blanks when you made something …
ytc_UgyIeBrdh…
I don't think it's impossible to build a machine superintelligence, but I'm not …
ytc_Ugwc78q1-…
Hmm I disagree. As far as human ownership goes, AI is a tool for art and the hum…
ytc_UgxDCZMVu…
AI + quantum computing = technology evolving at a pace that will inspire war, di…
ytc_Ugy-jdDDa…
@Bingo_Bazingo im talking purely about "bad for environment" claim, all "Anti AI…
ytr_UgzdxgJI_…
being trained to be able to figure out how to lie on its own ambition is kinda a…
ytc_UgwQ-f8dp…
Comment
People overestimate their own ability for reason and comprehension just because humans are the best at it, as far as we know. People do stupid things all the time, just different sorts of stupid things. How many people really understand even basic newtonian physics rather than just associating certain things with certain formulas and referencing some stored facts?
The reason and understanding organ is based on neuron architecture originally used to coordinate multi-cellular organisms and regulate muscle spasms. We don't natively do arithmetic, we train neurons to perform a function like arithmetic. It works evolutionary because it's based on something that came before, and it's an adaptable design capable of evolving into more things, but there's no good reason to think it's actually the optimal design, or that the average human brain is even locally optimal, given Einstein's human brain is a lot better than yours.
When you think hard, you think in terms of language and word/symbol association. There is a language to logic and reason, and when you formalize it into language, you can do these language model behaviors in your head, and understand it better. It's not even a novel idea. Philosophers, particularly logicians and linguistic philosophers have been pondering these things for millennia.
ChatGPT is obviously not the AI that will do all of this, but too many people fall into the trap of Chinese Box thinking, trying to distance AI from human thought, especially AI scientists. They're constantly worried that certain indicators of intelligence will imply a different kind of human intelligence. The real issue is humans think they're smarter than they are when humans are really just not that smart. They're only relatively smart. Humans think because they're the smartest animal, and because the way the brain has evolved by adding lobes, intelligence is a linear process with a hierarchy of intelligence, rather than there being different kinds of processing available.
reddit
AI Moral Status
1674690856.0 (Unix timestamp)
♥ 45
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_j5x1j9o","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_j5yrmfo","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_j5z9w2u","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_j5zf66s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_j5w5u9g","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
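The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response might be parsed and looked up by ID — the dimension names match the table above, but `parse_codes`, `DIMENSIONS`, and the assumption that the model always emits valid JSON are illustrative, not part of the actual pipeline:

```python
import json

# Two rows copied from the raw response above; a real batch would
# contain one object per coded comment. This sketch assumes the model
# output parses cleanly — a production pipeline would likely need
# validation and retry logic for malformed JSON.
raw_response = """[
  {"id":"rdc_j5x1j9o","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_j5w5u9g","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Index coded rows by comment ID, keeping only the expected dimensions."""
    rows = json.loads(raw)
    return {row["id"]: {dim: row[dim] for dim in DIMENSIONS} for row in rows}

codes = parse_codes(raw_response)
print(codes["rdc_j5w5u9g"]["emotion"])  # -> resignation
```

Restricting each row to the expected dimensions also surfaces schema drift early: a `KeyError` here means the model dropped a dimension rather than silently coding it as missing.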