Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Whether a decision maker or an advisor, how would we ever test how well an AI general works? In order to truly determine its potency and efficacy, we’d need to compare near-identical situations, one where the AI’s advice was followed precisely or the AI was allowed to make the decisions itself, and one where humans were left completely in charge. And different situations as well, and the world (fortunately!) doesn’t conduct enough real military operations to gain a large enough sample. And a problem with humans inputting the data is that we’re building bias into the AI. “These are the parameters we consider important, what’s our best course of action?” The whole point ought to be for the AI to identify parameters that *are* relevant but humans fail to take into consideration, thus seeing possible developments and opportunities that we do not. So in order for it to work, I think that it’d have to be a very independent AI, with access to everything from news articles to weather data to infrastructure plans to classified military information.
Source: reddit · Topic: AI Responsibility · Timestamp: 1648762176 (2022-03-31 UTC) · ♥ 2
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_i2wwksr","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"rdc_i2whbf1","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_i2utbur","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"rdc_i2v5jm5","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"rdc_i2rwc60","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"} ]