Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It seems this is that happened with God & Humans- Cosmos. The free-will was crea…" (ytc_UgycF18Ti…)
- "As an artist of 21 years of ur worried about AI it's because ur deeply insecure …" (ytc_UgxarWZwu…)
- "Even thought this is partly to do with protecting the work of the artist, I feel…" (ytc_UgwmINkNz…)
- "The AI program thieving from real artists with no say in the matter hurts enough…" (ytc_UgyOjUmJo…)
- "There's actually been mixed results for authors. It really depends on whether o…" (ytr_UgxKTm5M5…)
- "This is a great idea but with the state of children in America it will probably …" (ytc_UgzW8Dk17…)
- "Buddy... Listen! Whether AI will replace us or not is completely in the future a…" (ytr_Ugx7LCYBJ…)
- "We don't make art just for pleasure. Art has always been a transformer of consci…" (ytc_UgyI54m5c…)
Comment
As a software engineer who works with ML and AI I will say you're not wrong, the human "intellect machine" is more complex than we've yet documented. However, we fundamentally understand the mechanism that produces intelligence and all those interactions in the brain beyond what we already know are unlikely to contribute substantially to the problem of consciousness. It may be true that the brain has more synaptic interactions than we currently know about, but that doesn't fundamentally change the fact that synaptic computation is effectively a mathematical summation of these effects. One rain drop may not look like rain, but rain is composed of rain drops. Consciousness, as we understand it in technological terms, is like the rain. We only need to copy enough rain drops to make it look like rain, we don't need to copy the entire thunderstorm of the human brain to achieve functional consciousness.
Further, you mention microbes, one effect of which is chemical secretions that affect our mental state and contribute to us taking certain actions like seeking out food. The fact that we can be influenced in our decisions doesn't make us different from AI in which such mechanisms have not been included. We *can* include such mechanisms in the simulation, we simply choose not to because...well why would we? The point of general AI is not to make a fake human, but to make a smart machine. Why would we burden our machines, for example a self driving car, with feedback mechanisms that make it hangry if it's battery is getting low. Who wants a self driving car that gets pissy at you for not feeding it?
Source: reddit
Topic: AI Moral Status
Posted (Unix timestamp): 1655329233.0
♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ohy1vaa","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"rdc_icg3nys","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"rdc_ichkjmr","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"rdc_ich9lmn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"rdc_ici9c84","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
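The raw response above is a JSON array in which each element maps a comment ID to its four coded dimensions. A minimal sketch of how such a response might be parsed and validated, assuming the dimension vocabularies are limited to the values visible on this page (the `ALLOWED` sets and all function names below are assumptions for illustration, not the tool's actual codebook or API):

```python
import json

# Dimension vocabularies inferred from the values shown on this page;
# the real codebook may contain additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "unclear"},
    "emotion": {"outrage", "mixed", "resignation", "approval", "indifference"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    {comment_id: {dimension: value}} mapping, replacing any value outside
    the known vocabulary with "unclear" rather than dropping the record."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                codes[dim] = "unclear"  # keep the record, flag the bad value
        coded[rec["id"]] = codes
    return coded

raw = ('[{"id":"rdc_ici9c84","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"indifference"}]')
print(parse_llm_response(raw)["rdc_ici9c84"]["policy"])  # industry_self
```

Falling back to "unclear" on an out-of-vocabulary value is one possible design choice; a stricter pipeline might instead reject the whole response and re-prompt the model.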