Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If I made a robot, gave it some kind of "learning AI" so it could learn on its own, it still wouldn't have emotions. If I gave that same robot an upgrade with some kind of "Artificial Emotions", it's still a machine that only has the learning AI and artificial emotions I gave it. Even if it "decided" that humans should be killed, it doesn't change the fact that it's still made of metal, plastic, or whatever else it has. Sure, it can "make" decisions, but does it do it "consciously"? I don't think so. It's just a bunch of parts I put together along with a bunch of lines of code to enable it to "make" decisions and "feel".

Is it possible to give "consciousness" to an object? Just because an object reacts doesn't mean it's conscious. Look at those sensors that get triggered when somebody approaches nearby. The sensors will probably open doors, turn on the lights, or do whatever else it was designed to do, that's how it reacts, but that doesn't make the sensors alive.

I think this treatment of robots as if they're capable of making harmful decisions is something that could be maliciously abused. Just imagine. Let's say I provided a little murderous code in the AI. Not blatantly straightforward murder code, but something that "suggests" to the AI that murdering a specific person is something it should do. Let's say the AI takes the bait, builds the actual murder code for itself, which then leads to it deciding to kill that specific someone I wanted to get killed. The robot succeeds and all I have to do is pay some kind of robo-insurance to get away with the crime because "the robot decided to kill somebody on its own".

Anyway I don't see anything wrong treating robots with kindness. It's like with the dolls and toys we own. We love our stuff and when we're kids, sometimes we treat these objects as if they're real and as if they have thoughts and feelings. I think that's pretty normal, but when it comes to serious things such as robots murdering people or doing bad
reddit · AI Moral Status · 1524969035.0 · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dy4e3bg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "unclear"},
  {"id": "rdc_dy4ftoz", "responsibility": "distributed", "reasoning": "deontological",    "policy": "regulate",  "emotion": "unclear"},
  {"id": "rdc_dy4phxw", "responsibility": "company",     "reasoning": "virtue",           "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_dy54eq6", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_dy57k0p", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",   "emotion": "unclear"}
]
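The Coding Result shown above is simply the entry in this JSON array whose `id` matches the comment being inspected (here, `rdc_dy57k0p`). A minimal sketch of that lookup, assuming the pipeline parses the raw response with a standard JSON parser (this is illustrative, not the tool's actual code):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment id.
# (Two entries copied from the response above; the full array has five.)
raw = '''[
  {"id": "rdc_dy4e3bg", "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_dy57k0p", "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "unclear"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Pull the coding for this comment's id to populate the result table.
result = codings["rdc_dy57k0p"]
print(result["responsibility"], result["reasoning"])  # → developer deontological
```

Note that the dashboard's per-dimension values (Responsibility: developer, Reasoning: deontological, and so on) correspond field-for-field to that matched object.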