Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Just reading that Bosch just today opened their chip plants in Germany.
"i woul…
rdc_h17u4wz
flip flops and dystopian AI savior violence strategy is pretty jarring...even af…
ytc_UgytnVzNe…
This is actually funny cause I’m doing a paper on AI technology as we speak 😂…
ytc_UgyJIKCw3…
For now, the fear mongering is just marketing. Watching this while getting chatG…
ytc_UgxkxDaFT…
soooo...raping a baby is cool, but people in love getting married is not? hrmmm.…
rdc_c33sib3
People just expect to automatically master something or not ever be able to do i…
ytc_UgwhCeZnE…
I understand the semantic (lack of) nuance you're trying to make. I disagree wit…
rdc_ks8xz7k
Chat GPT: A genocide is an attack against a group of people that involves meassu…
ytc_Ugy0NGM0-…
Comment
100%.
This is actually much more thought provoking than any of the questions surrounding artificial intelligence and whether it is or isn’t sentient.
Humans attach special significance to the fact that we’re biological organisms. The fact that the human brain is so difficult to study means we likely know more about the inner workings of AI than we do our own thinking. This doesn’t imply that human thought is more sophisticated, just that we don’t understand it. It could very well be that there are many similarities between AI and humans, given that we design them to extrapolate human-like reasoning.
As you pointed out, it could also be the case that humans are much less complicated than we want to believe. If there is one thing that's special about the human brain, it's the level of energy efficiency achieved. It would take massive leaps in efficiency for LLMs to come anywhere close to rivaling the efficiency of our brains.
This is actually the biggest issue with the concept of infinitely scaling AI, and it's something that CEOs at these companies really aren't acknowledging because it destroys the hype around their business. Even if we could design superintelligence at the software level by scaling out compute, our energy infrastructure would be unable to handle the demand placed upon it. The more training and inference required by a model, the more it costs to run a query.
When compared with traditional rules-based applications, it's difficult to imagine how LLMs could ever be cost-competitive across all domains. Any advance in compute efficiency at the physical level would probably generalize to CPU compute as well, so it seems unlikely that this cost advantage will ever go away.
Source: reddit
Topic: AI Moral Status
Posted (Unix timestamp): 1750956638.0
♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mzwu8vj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_mzymc5w","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_n000569","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_mzw21p8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_mzw652j","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
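The raw response above is a plain JSON array, one object per coded comment. A minimal sketch of how such a payload might be parsed, sanity-checked, and indexed for the "look up by comment ID" view (the field names come from the response itself; the validation rule and variable names are assumptions for illustration):

```python
import json

# Raw LLM response, copied verbatim from the panel above.
raw = '''[
{"id":"rdc_mzwu8vj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_mzymc5w","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_n000569","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_mzw21p8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_mzw652j","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

# Each record must carry exactly these fields (assumed from the sample above;
# the real codebook may define more dimensions or allowed values).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
valid = [r for r in records if set(r) == EXPECTED_KEYS]
by_id = {r["id"]: r for r in valid}  # supports lookup by comment ID

print(len(valid))                       # 5
print(by_id["rdc_mzwu8vj"]["emotion"])  # indifference
```

Dropping malformed records rather than raising keeps a single bad LLM output from blocking the whole batch; a stricter pipeline might instead reject the entire response and re-prompt.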