Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What kind of AI are we talking about here? Oh, I see that it is LLMs... then it isn't strange for the AIs in question to not have an actual understanding of such things; they are language prediction machines, that's about it. It makes me wonder more what the prompts and constraints (or lack thereof) on them were. Just make it a hard rule for it to never contemplate being the first to make a nuclear strike, and the AI should stop recommending it. And even then, it is insanity to use a language prediction model to make tactical or strategic choices. The only real use they would have in this scenario is to digest protocols of war and past incidents, offer a summary of those, and explain how they may be similar to a current scenario. TL;DR: it is user error, both for not giving proper guidelines to the LLM... and for using an LLM for a task that is an ill fit for its architecture in the first place.
reddit · AI Jobs · 1772036659.0 (Unix timestamp) · ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          regulate
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
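
The dimension/value pairs above map one-to-one onto a small record type. A minimal sketch in Python, assuming an ISO-8601 timestamp; the name `CodingResult` is hypothetical, not from the source:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: the four coded dimensions plus the coding timestamp."""
    responsibility: str  # e.g. "developer"
    reasoning: str       # e.g. "mixed"
    policy: str          # e.g. "regulate"
    emotion: str         # e.g. "mixed"
    coded_at: datetime

# The table above, as a record:
result = CodingResult(
    responsibility="developer",
    reasoning="mixed",
    policy="regulate",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-25T08:33:43.502452"),
)
```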
Raw LLM Response
[ {"id":"rdc_o7dnhgm","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_o7c8r49","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_o7cbon6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_o7cdlq6","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"rdc_o7clpsq","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"} ]