Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugwf1xz_u…: Ultimately, it comes down to personal taste and individual preferences, but not …
- ytc_Ugx4XaegJ…: I’ve been following AI since IBM Watson, and even though I freaking loved ChatGP…
- ytc_Ugw07R8d7…: I wish there was a way to spotlight this situation a lot earlier. Predictive Pol…
- ytc_Ugw404ZdE…: CNBC - starting a video with "How AI Is Killing..." kinda defeats any kind of ho…
- ytc_UgwBrm0YN…: why not feed AI what we want it to think human life is so when it inevitably goe…
- ytc_Ugzt_gypv…: Yeah, the best AI (and we are still decades or more from "the best") is only as …
- ytr_Ugxca7Pui…: It's not sentient. They could add a billion more compute units and a trillion mo…
- ytc_UgyXcXMLY…: I think we need to illegalize a.i movies, this is getting out of hands 💀😭…
Comment
Of course we haven't come close to building an AI that rivals humans in thinking, but that might simply be precisely because we don't have nearly enough raw computing power available. Even the largest artificial neural networks today are only a fraction of a percent as complex as the brain.
Just because it is not possible today, you can't convincingly claim that it will never be possible. In fact I think it is far more realistic to assume that it *is* possible, because unless someone can prove otherwise, it makes the most sense to me that our brains are literally no different in operation and psyche than the mechanical artificial neural networks, only much more complex, and trained through millions of years to be extremely optimized at what it does.
The Chinese Room argument never made sense to me. Just switch out the computer in that scenario with an actual human Chinese speaker, and the argument basically says that the Chinese speaker doesn't understand Chinese. The only thing the thought experiment suggests is that it is impossible to tell the difference between understanding and not understanding, not that it is impossible to understand for AI. And IMO it is irrelevant, if something can do a task perfectly well, then by definition it does understand it, no matter what method it is using.
- Source: reddit
- Category: AI Moral Status
- Posted: 1518453132.0 (Unix timestamp)
- ♥ 10
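The numeric value 1518453132.0 above is a Unix timestamp (seconds since the epoch); a quick conversion shows when the comment was posted:

```python
from datetime import datetime, timezone

ts = 1518453132.0  # Unix timestamp from the comment metadata above
posted = datetime.fromtimestamp(ts, tz=timezone.utc)
print(posted.isoformat())  # 2018-02-12T16:32:12+00:00
```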
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[
  {"id": "rdc_du4rapj", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_du4syjf", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_du5jhx1", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_du46h2b", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_du45k3r", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```