Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The article *does* note that the AI would often use *tactical* nuclear weapons, while saying that the other AI would only infrequently deescalate. That doesn't really give you much information about what happened, if the AIs continued to use tactical nukes back and forth or if they escalated to a full blown strategic nuclear exchange. It also doesn't tell us what the goal of these exercises was. LLMs are chat bots, they do not have a sense of morality or obligation, they will do exactly what they are told to do. If you tell them to 'win the game' and define winning as 'be the last man standing,' then any outcome in which the LLM is functional is acceptable, which means that a nuclear exchange where the entire population of their assigned country is eliminated is fine.
reddit · AI Jobs · 1772035754.0 · ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o7cgrx6", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_o7cxlwc", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_o7cu6wf", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_o7by3p2", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_o7cie6h", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
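The batched response above maps each comment id to its coded dimensions; the entry for `rdc_o7cu6wf` is the one shown in the Coding Result table. A minimal sketch of how such a response might be matched back to a single comment (the function name and variable names here are illustrative, not part of the actual pipeline):

```python
import json

# A truncated copy of the batched model response shown above
# (illustrative input, not the full five-entry array).
raw_response = """
[ {"id": "rdc_o7cu6wf", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_o7by3p2", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"} ]
"""

def coding_for(comment_id: str, response_text: str) -> dict:
    """Parse the batched JSON array and return the coding for one comment."""
    rows = json.loads(response_text)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

coding = coding_for("rdc_o7cu6wf", raw_response)
print(coding["responsibility"], coding["emotion"])  # none mixed
```

Because the model returns one array for the whole batch, a lookup keyed on `id` is enough to recover each comment's row; a `KeyError` here would signal that the model dropped or renamed an id in its response.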