Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They are not. The goal of a conflict is control of a region. If the region is damaged, the total potential value of your victory decreases. If there is a nuclear exchange- there is a real risk for a total loss of habitability on earth - and whatever problem you were trying to resolve before is now eclipsed by the problem of surviving in a nuclear wasteland. So nuclear weapons are not efficient at winning a conflict. They are efficient at ending a game with no winner. But our AI is being developed by people who play starcraft and didn't bother studying philosophy or ethics. Smoke em while you got em boys, at least we don't have to worry about retirement savings.
Source: reddit · AI Jobs · timestamp 1772029511 · score 42
Coding Result
Dimension        Value
Responsibility   none
Reasoning        utilitarian
Policy           none
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_o7bwsju","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"rdc_o7bwsaa","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_o7c5v4k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_o7bwgdl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"mixed"}, {"id":"rdc_o7cbdit","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"} ]