Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is an incredibly useful tool, but that's all it should ever be: a tool. Having AI be the final decision maker is an awful idea. For example, if you gave an AI all the data on a conflict you currently possess and asked it how to ensure your side's victory, there are two immediately evident possibilities: 1. disengage from the conflict/surrender, since you technically don't lose, you just don't win; 2. violate the Geneva Conventions: kill countless civilians, use false flags, and resort to bioweapons. These two options, unless specifically ruled out, will almost certainly be the outcome. In option 1 you remove yourself from the conflict at the cost of your own authority, and in option 2 you ensure your enemy can't survive even if they win.
Source: reddit · Topic: AI Responsibility · Timestamp: 1648700230.0 · ♥ 3
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_i2sd5ts","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_i2u4e7o","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_i2s40i9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_i2t3ztm","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"rdc_i2s307w","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
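The raw response is a JSON array of per-comment codings, keyed by comment id, with one value for each coding dimension (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a response could be parsed and a single comment's coding looked up, assuming only the field names visible above (the two-item `raw` string is an excerpt of the response shown, and the variable names are illustrative):

```python
import json

# Excerpt of the raw LLM response above: a JSON array of codings,
# one object per comment, each carrying the four coding dimensions.
raw = """[
  {"id":"rdc_i2sd5ts","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_i2t3ztm","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]"""

# Index the codings by comment id for direct lookup.
codings = {item["id"]: item for item in json.loads(raw)}

# Look up the coding for the comment inspected above.
coding = codings["rdc_i2t3ztm"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → developer liability fear
```

This matches the Coding Result table above: the tool appears to pick out of the batched response the object whose `id` corresponds to the displayed comment.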