Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
As a software engineer who works with ML and AI I will say you're not wrong, the human "intellect machine" is more complex than we've yet documented. However, we fundamentally understand the mechanism that produces intelligence and all those interactions in the brain beyond what we already know are unlikely to contribute substantially to the problem of consciousness. It may be true that the brain has more synaptic interactions than we currently know about, but that doesn't fundamentally change the fact that synaptic computation is effectively a mathematical summation of these effects. One rain drop may not look like rain, but rain is composed of rain drops. Consciousness, as we understand it in technological terms, is like the rain. We only need to copy enough rain drops to make it look like rain, we don't need to copy the entire thunderstorm of the human brain to achieve functional consciousness. Further, you mention microbes, one effect of which is chemical secretions that affect our mental state and contribute to us taking certain actions like seeking out food. The fact that we can be influenced in our decisions doesn't make us different from AI in which such mechanisms have not been included. We *can* include such mechanisms in the simulation, we simply choose not to because...well why would we? The point of general AI is not to make a fake human, but to make a smart machine. Why would we burden our machines, for example a self driving car, with feedback mechanisms that make it hangry if it's battery is getting low. Who wants a self driving car that gets pissy at you for not feeding it?
reddit · AI Moral Status · 1655329233.0 · ♥ 21
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       unclear
Policy          industry_self
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ohy1vaa", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_icg3nys", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_ichkjmr", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_ich9lmn", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_ici9c84", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
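The raw response above is a JSON array with one coding object per comment id, from which the per-dimension values for a single comment can be extracted. A minimal sketch of that extraction, assuming this array shape (the helper name `coding_for` is illustrative, not part of the tool):

```python
import json

# The batched LLM response shown above: one coding object per comment id.
raw = """[
  {"id":"rdc_ohy1vaa","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_icg3nys","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_ichkjmr","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"rdc_ich9lmn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"rdc_ici9c84","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]"""

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the dimension→value coding for one comment id, dropping the id field."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(f"no coding for {comment_id}")

print(coding_for(raw, "rdc_ici9c84"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist', 'policy': 'industry_self', 'emotion': 'indifference'}
```

With the comment id `rdc_ici9c84`, this reproduces the Dimension/Value table shown above (responsibility=developer, reasoning=consequentialist, policy=industry_self, emotion=indifference).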