Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a general matter, AI-like systems already make life and death decisions in the military, the most obvious being fighter fly-by-wire and self-defence systems. In a modern military (experience may be different in Russia) it's kind of awe-inspiring how much is given over to automated decision-making systems on the basis of policy, process and trained AIs. This can very much impact the lives of both military and civilians. That said, I think current AIs are wholly unsuited for this kind of work at this point. We're only just beginning to understand the consequences of bias in image classification, triage is a much harder problem. I guess that's why DARPA, the Dept of Mad Scientists, are the ones looking at it.
Source: reddit · Topic: AI Responsibility · Timestamp: 1648682989.0 · ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_i2sd5ts","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_i2u4e7o","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_i2s40i9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_i2t3ztm","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"rdc_i2s307w","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
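The raw response is a JSON array of per-comment records, each keyed by an id and carrying the four coding dimensions. A minimal sketch of how one record could be extracted into a coding result (the record shown here is the `rdc_i2s40i9` entry from the response above, whose values match the coded table; the parsing code itself is illustrative, not the actual pipeline):

```python
import json

# One record from the raw LLM response above (the others omitted for brevity)
raw = ('[{"id":"rdc_i2s40i9","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')

records = json.loads(raw)

# Index records by comment id so any coded comment can be looked up
by_id = {r["id"]: r for r in records}

result = by_id["rdc_i2s40i9"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {result[dim]}")
```

Looking up by id rather than by list position guards against the model returning records in a different order than the comments were submitted.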