Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The unfortunate possibility is that they might just be doing it because that is what they're trained on: media, literature, etc., which all feature nuclear weapons usage in highly speculative and usually dramatized ways. The result is that a lot of these LLMs will use nuclear weapons because they're absorbing information with all sorts of dubious notions about the inevitability of nuclear war. It's not too different from asking an LLM whether you should save on gas money by taking a bus to the car wash: quite a few of these models will see nothing wrong with taking a bus to get yourself to the car wash. LLMs are useful for questions that are technical and have a right and wrong answer; they are not useful for open-ended thought experiments that require insight into human psychology.
reddit AI Jobs 1772190204.0 ♥ 28
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_o7ohrwj","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_o7ojuwr","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"rdc_o7ojynh","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_o7qliov","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_o7pl6a6","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
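A minimal sketch of how a raw response like the one above can be parsed back into per-comment codes. This assumes Python and that the model output is a valid JSON array; the two records shown are copied from the raw response above (a subset, for brevity), and the id `rdc_o7ojuwr` is the record whose dimensions appear in the Coding Result table.

```python
import json

# Raw model output: a JSON array of coded records, one per comment id
# (subset of the raw response shown above, for brevity).
raw = (
    '[ {"id":"rdc_o7ohrwj","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_o7ojuwr","responsibility":"distributed",'
    '"reasoning":"mixed","policy":"unclear","emotion":"fear"} ]'
)

records = json.loads(raw)

# Index the coded records by comment id for lookup.
by_id = {rec["id"]: rec for rec in records}

# The record for this comment; its values match the Coding Result table.
coded = by_id["rdc_o7ojuwr"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → distributed mixed unclear fear
```

In practice the parse step would be wrapped in error handling, since the model may return malformed JSON or omit an id.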