Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
MAD is more or less a concept humans have decided on to feel better about wielding doomsday devices. It’s never really been tested, and the few times it’s been stressed as a principle (false alarms that seem valid in every way, undocumented test launches close to allies, etc.) humans at the end of the chain made a blatantly illogical choice to ignore all evidence and orders to deny a launch. There’s no way to hardcode MAD without also telling AI to ignore logic and evidence in its decision making process in spite of direct instructions from a human.
reddit · AI Jobs · 1772032259.0 · ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o7bwsju", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_o7bwsaa", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "rdc_o7c5v4k", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "rdc_o7bwgdl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",      "emotion": "mixed"},
  {"id": "rdc_o7cbdit", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"}
]
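To see how a single comment's coding row is recovered from a batch response like the one above, here is a minimal sketch: parse the raw JSON array and index it by comment id. This assumes the raw response is valid JSON; the lookup code and the variable names are illustrative, not the tool's actual implementation, and which id corresponds to the comment shown here is not stated on this page.

```python
import json

# The raw batch response, verbatim from the model output above.
raw = """
[
  {"id": "rdc_o7bwsju", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_o7bwsaa", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_o7c5v4k", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_o7bwgdl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "mixed"},
  {"id": "rdc_o7cbdit", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
"""

# Index every coded row by its comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Looking up one id yields the four coded dimensions for that comment.
print(codings["rdc_o7bwsaa"])
# → {'id': 'rdc_o7bwsaa', 'responsibility': 'none',
#    'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'fear'}
```

Indexing by id rather than by position makes the lookup robust if the model returns the batch in a different order than the comments were submitted.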