Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, AI chatbots tend to *talk about using nukes and violence*, because they are trained on discussions from people that like to wildly overreact and also talk about genocide and using nukes etc. If they were instead trained on security reports and international relations as a database, the chatbots would act differently
reddit · AI Jobs · 1707134172.0 · ♥ 10
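The numeric value in the metadata line above looks like a Unix timestamp (seconds since the epoch). That is an assumption — the dump does not label the field — but under it the value decodes to a calendar date like so:

```python
from datetime import datetime, timezone

# Assuming 1707134172.0 is a Unix timestamp in seconds (the dump does
# not label the field), decode it to a UTC datetime.
posted = datetime.fromtimestamp(1707134172.0, tz=timezone.utc)
print(posted.strftime("%Y-%m-%d %H:%M:%S"))
# → 2024-02-05 11:56:12
```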
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kozx3ui", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_kp0ib3o", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kp0jiap", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_kp1avx3", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "rdc_kozknuu", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
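The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of parsing such a response and tracing one coded comment back to its exact model output (the raw string here is abbreviated to two of the five objects):

```python
import json

# Abbreviated copy of the raw batch response shown above (two of the
# five objects, verbatim field names: id, responsibility, reasoning,
# policy, emotion).
raw = ('[ {"id":"rdc_kozx3ui","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
       ' {"id":"rdc_kp0ib3o","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]')

# Parse the JSON array and index the codings by comment id so any
# coded comment can be looked up in the exact model output.
codings = {c["id"]: c for c in json.loads(raw)}

coding = codings["rdc_kp0ib3o"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → developer regulate fear
```

Indexing by `id` is what lets the dashboard pair the per-comment table (Responsibility/Reasoning/Policy/Emotion) with the matching object in the batch response.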