Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The output of a chatbot depends on its training data, yes? So if it's been trained on a load of apocalyptic fiction written by humans about nuclear wars then yes we can expect that kind of output. I don't think anybody is seriously considering giving a chatbot control of any military actions. The LLMs being proposed are to assist humans, not make decisions, and would surely be trained on sensible data, not the braindead opinions of millions of Internet users. Look at the news subs here and you'll see loads of nutters who actually seem to want a nuclear holocaust, so it's no surprise ChatGPT's output shows a similar bias.
Source: reddit · AI Jobs · 1707125214 (2024-02-05) · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kozwlo9", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_kozxph5", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kozz461", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_kp00ore", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_kp05xbx", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
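As a minimal sketch of how a raw response like this could be inspected, the snippet below parses the five coding passes with Python's standard `json` module and tallies the modal value per dimension. The majority-vote aggregation is illustrative only, not necessarily the tool's method; note the coding result shown above matches the final record rather than a per-dimension tally.

```python
import json
from collections import Counter

# Raw LLM response as shown above: five coding passes over the same comment.
raw = """[
  {"id": "rdc_kozwlo9", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_kozxph5", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kozz461", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_kp00ore", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_kp05xbx", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]"""

records = json.loads(raw)

# For each coded dimension, report the most frequent value and its support.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    value, n = Counter(r[dim] for r in records).most_common(1)[0]
    print(f"{dim}: {value} ({n}/{len(records)})")
```

On this response, the tally gives `responsibility: developer (2/5)`, `reasoning: consequentialist (4/5)`, and `policy: none (4/5)`, while the emotion codes are all distinct — a quick way to see where the model's passes agree and where they scatter.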