Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>I thought the US had a policy that a weapon system can never make a kill decision without a human confirming it first. Well a human confirms the drone strikes and innocent lives are lost sometimes. I could see an autonomous robot being confirmed for entry into a building and any innocent lives lost are going to be treated the same way. Granted drones are still manually flown, but if they were autonomous its not like it would make much of a difference. Drop Bomb Y/N. Hopefully an A.I. with a rifle equivalent, leads to less collateral damage not more.
reddit · AI Governance · 1438004832.0 · ♥ 90
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_cjoe5e6","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_cjoet5d","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_cjoodqi","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_cthqsvx","responsibility":"unclear","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"rdc_cthqdb1","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
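A response like the one above can be turned back into per-comment coding records with a small validation step. The sketch below is a minimal, hypothetical parser: the `parse_llm_response` helper is not part of any pipeline shown here, and the `ALLOWED` value sets contain only the labels that appear in this raw response, so the real codebook may permit more values.

```python
import json

# Label sets observed in the raw response above; the actual codebook
# may allow additional values (assumption, for illustration only).
ALLOWED = {
    "responsibility": {"none", "user", "company", "government", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse the raw LLM JSON array into {comment_id: coding dict},
    dropping records with missing keys or out-of-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"rdc_cthqdb1","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
result = parse_llm_response(raw)
print(result["rdc_cthqdb1"]["policy"])  # regulate
```

Dropping (rather than repairing) invalid records keeps the coding table conservative: a malformed or off-codebook answer is simply not counted, which matches how the "Coded at" result above carries only validated dimension values.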