Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
International agreements don't mean shit for this kind of tech. There's a tiny chance that everyone cooperates. However, there's a much greater likelihood that somebody doesn't cooperate. The danger of being caught short is also immense. So, the risk (likelihood of running into autonomous combat drones x danger) encourages everybody to build them. It's suicide not to.

In fact, the dream scenario is to reap the benefits of signing an agreement without abiding by it. If you're a big country you can keep your rule breaking secret, you can demand transparency from small countries (neutralising them and building their dependency on you), and you can always hope some countries are naively optimistic and don't build weapons anyway.

We already have AI F-16s. https://www.thedrive.com/the-war-zone/39899/darpa-now-has-ai-controlled-f-16s-working-as-a-team-in-virtual-dogfights When we have full self driving vehicles, do you think that won't be applied to submarines, ships, tanks, and jets? Of course it will. Once it is, why would you want humans on the field as basic foot soldiers?

EDIT Not to mention, unless you discover transgression very early, how do you enforce the rule once a country breaks it? Imagine China (or the US) breaks the agreement. How do you punish them? You basically can't - they can go to war at almost no cost to themselves (or far less of a cost if they use people and machines). In the absence of your own robots, the only major recourse is an even bigger threat: nukes.
reddit AI Moral Status 1616677532.0 ♥ 55
Coding Result
Dimension       Value
Responsibility  none
Reasoning       utilitarian
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_gs5xs8e", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_gs5ufqy", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_gs5zczy", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_gs6ba54", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_gs5whnt", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
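The raw response is a JSON array of five independent coding runs. A minimal sketch of how such an array could be parsed and tallied per dimension follows; this is an illustrative assumption, not the pipeline that produced the Coding Result above (note that the result's Reasoning value, "utilitarian", does not appear verbatim in any raw run):

```python
import json
from collections import Counter

# Raw LLM response copied from the record above.
raw = """
[ {"id":"rdc_gs5xs8e","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_gs5ufqy","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_gs5zczy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_gs6ba54","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_gs5whnt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]
"""

codes = json.loads(raw)

# Tally each coding dimension across the five runs.
dims = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codes) for dim in dims}

for dim in dims:
    print(dim, dict(tallies[dim]))
# responsibility and policy agree 4/5; emotion disagrees across all five runs.
```

A tally like this makes inter-run agreement visible at a glance: responsibility, reasoning, and policy each have a 4-of-5 modal value, while the emotion codes are fully dispersed.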