Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yup. My usual example on this argument is that you can ban autonomous weapons all you want, but if I wanted to make a robotic tank that killed all humans on sight, the only real giveaway to anyone that I'm working on this would be the moment it bursts out of my garage and starts blasting away. For so many of these technologies with massive danger potential, there's no real way to tell if someone is working on them before they start being obvious about adopting it.
reddit AI Harm Incident 1563496864.0 ♥ 17
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_eu5wfyn","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_eu5vhtl","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_eu6bpbv","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_eu6c2xk","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_eu6kr5c","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
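The model returns one JSON array per batch, so each coded comment has to be matched back to its record. A minimal sketch in Python, assuming only the array shape shown above; since the record id for this particular comment is not stated anywhere on the page, the example filters by the coded dimension values instead:

```python
import json

# The raw LLM response as shown above (a JSON array, one record per comment).
raw = """[
  {"id":"rdc_eu5wfyn","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_eu5vhtl","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_eu6bpbv","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_eu6c2xk","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_eu6kr5c","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]"""

records = json.loads(raw)

# Keep records whose dimensions match the coding result table above
# (reasoning=consequentialist, policy=ban, emotion=fear).
matches = [
    r for r in records
    if r["reasoning"] == "consequentialist"
    and r["policy"] == "ban"
    and r["emotion"] == "fear"
]
print([r["id"] for r in matches])
# → ['rdc_eu5vhtl', 'rdc_eu6bpbv', 'rdc_eu6c2xk']
```

Note that three records share identical dimension values, so value-based matching alone is ambiguous here; a real pipeline would carry the record id alongside each comment.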