Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Or perhaps the moral ideal is to minimize the presence of war globally. Which becomes more difficult when the most powerful nations in existence (the only ones economically fit to create a robot army) lose the only reason that most of their citizens would be hesitant to start a war over (the loss of your own forces' lives). You also seem to have forgotten that killing the other guy more efficiently is still killing. So unless you're some kind of monster that assigns human life a value based on national borders, then of course you'd desire a world with less killing.
reddit AI Moral Status 1579446811.0 ♥ 1
Coding Result
Dimension        Value
Responsibility   government
Reasoning        utilitarian
Policy           regulate
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ff258fo", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "rdc_feyqywx", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_feykkoh", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_feyymje", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_kig9tu5", "responsibility": "company",    "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
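The model codes several comments per call and returns one JSON object per comment, keyed by "id". A minimal sketch of how such a batched response might be matched back to individual comments (Python is an assumption here; the tool's actual implementation language is not shown):

```python
import json

# Raw batched LLM response, as shown above. Each object carries the
# comment id plus the four coding dimensions.
raw = '''[
  {"id":"rdc_ff258fo","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_feyqywx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_feykkoh","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_feyymje","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kig9tu5","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''

records = json.loads(raw)

# Index by comment id so one comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

print(by_id["rdc_feykkoh"]["policy"])   # → regulate
print(by_id["rdc_kig9tu5"]["emotion"])  # → outrage
```

Looking records up by id rather than by position keeps the mapping correct even if the model returns the objects in a different order than the comments were sent.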