Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And I would like the human operator to have to go on record to override the machine, at least when the decision is to attack. If the machine says "that's not the guy" and the operator says "no, that's the guy," I would like that on record for when that wasn't the guy. People get tired, moody, burned out. They get ambitious, and want to impress superiors with a gung-ho attitude. And I'd wager machines/software would be better at telling apart bearded Middle Eastern men from bad video than white midwesterners. Mainly because of the role of the [fusiform gyrus](https://en.wikipedia.org/wiki/Fusiform_face_area) in facial recognition, when the faces look different than the ones we were exposed to in our youth when our brains were still forming. That isn't to say that algorithms can't encode the biases of those writing them. I'm saying I trust machine identification of facial, gait, and other types of recognition over a fatigued, angry, ambitious, tired human. You can say "then maybe don't bomb people?" but that's not likely to be a world we ever live in. If you're a complete pacifist, fine, but if not we have to consider whether technology can reduce false positives.
reddit · AI Responsibility · 1648689998.0 · ♥ 3
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_i2s8j5h", "responsibility": "distributed", "reasoning": "mixed", "policy": "industry_self", "emotion": "mixed"},
  {"id": "rdc_i2smx2p", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_i2sjcg5", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_i2s4sm4", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2s8p86", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
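The raw response is a JSON array of per-comment coding records keyed by `id`; the Coding Result shown above corresponds to the record whose `id` matches this comment (`rdc_i2sjcg5`). A minimal sketch of that lookup in Python — the field names come from the response above, while the function name and the truncated sample array are illustrative:

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coding records.
raw = '''[
  {"id": "rdc_i2s8j5h", "responsibility": "distributed", "reasoning": "mixed",
   "policy": "industry_self", "emotion": "mixed"},
  {"id": "rdc_i2sjcg5", "responsibility": "user", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the coding record for one comment, raising KeyError if absent."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

record = coding_for(raw, "rdc_i2sjcg5")
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
# → user deontological regulate outrage
```

Because the model returns one record per comment in a single batched array, a missing or mistyped `id` surfaces immediately as a `KeyError` rather than silently mis-attributing a coding.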