Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well current aerial drones don't make any decisions, all they do is a bunch of calculations to make sure the missile goes where the human decided it should go. A ground-drone, one for say entry into a building, would have to work the same way. Ultimately the decision to fire should rest with a human controller, and the drone should simply do stuff to make the bullets go where the human decided. That will not prevent innocent lives from being lost, but will be better than "OK drone you have permission to enter the building, it's all you from here, Wall-E". If the second scenario ever happens then you've got to have a conversation about who's responsible for lives lost, similar to conversations many state governments are having about self-driving cars. Is it the AI programmer's neck on the line?
reddit · AI Governance · 1438005284.0 · ♥ 95
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          liability
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_cthqlpf","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"rdc_ctht6fb","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_cthrzq9","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_cthpo2j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_cthubmz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
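To inspect a record like this programmatically, the raw response can be parsed as JSON and checked against the coding-result table. A minimal sketch, assuming the response is always a JSON array of objects keyed by `id` (the id `rdc_cthqlpf` and the dimension values are taken from the record above):

```python
import json

# A trimmed copy of the raw LLM response shown above (first two records only)
raw = (
    '[{"id":"rdc_cthqlpf","responsibility":"user","reasoning":"deontological",'
    '"policy":"liability","emotion":"approval"},'
    '{"id":"rdc_ctht6fb","responsibility":"government","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"indifference"}]'
)

records = json.loads(raw)

# Look up the record whose id matches the coded comment
coded = next(r for r in records if r["id"] == "rdc_cthqlpf")

# Cross-check the parsed record against the coding-result table
assert coded["responsibility"] == "user"
assert coded["reasoning"] == "deontological"
assert coded["policy"] == "liability"
assert coded["emotion"] == "approval"
```

The same lookup works for any of the five ids in the response; a record whose parsed values disagree with the stored table would fail the assertions and flag a coding discrepancy.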