Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not news because people are bad at statistics, you just don't understand why people are upset. You are correct that this is an *extremely* accurate system from a statistical and technological perspective. 99.5% accuracy is quite good. But you're still wrong. The fact remains - the overwhelming majority of people flagged were false positives. This *isn't* an argument that the system is flawed - it's doing what it's designed to do and doing that pretty effectively. It's an argument against sweeping facial recognition based mass surveillance entirely. You're mistaking a moral argument for a technological/statistical one. In fact, what you're saying drives the point home even more: the system is working quite well and doing what it's supposed to do and what the police wanted it to do. Yet in spite of that, 81% of results are false positives. Those are real human beings with rights too. It's a little depressing that you've posted a convincing argument for why any sort of large scale automated mass surveillance is inherently repugnant, only to completely miss the point.
reddit AI Harm Incident 1562213397.0 ♥ 55
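The commenter's point is the base-rate effect: even a very accurate matcher produces mostly false alarms when genuine matches are rare in the scanned population. A minimal sketch with hypothetical crowd sizes (the comment gives the 99.5% and 81% figures but no actual counts, so the numbers below are illustrative assumptions chosen to reproduce that outcome):

```python
# Hypothetical numbers chosen only to illustrate the base-rate effect;
# the real deployment figures are not given in the comment.
scans = 100_000              # faces scanned by the system
genuine_matches = 120        # people in the crowd actually on the watchlist
false_positive_rate = 0.005  # the "99.5% accurate" figure, read as a 0.5% FPR

false_alarms = (scans - genuine_matches) * false_positive_rate
true_hits = genuine_matches  # optimistically assume every genuine match is caught

false_positive_share = false_alarms / (false_alarms + true_hits)
print(f"{false_positive_share:.0%} of all flags are false positives")
# → 81% of all flags are false positives
```

So a 0.5% error rate applied to ~99,880 innocent faces yields ~499 false alarms against 120 true hits, and the false positives dominate exactly as the comment describes.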
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fvx67z5", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "rdc_fvx40us", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_esqiowe", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_esqzw30", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "mixed"},
  {"id": "rdc_estsssr", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability", "emotion": "fear"}
]
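The raw response is a JSON array coding five comments in one batch; the coding result shown above corresponds to the one record whose values match (developer / consequentialist / regulate / mixed). A sketch of how such a batch might be parsed back into per-comment codes, assuming the id `rdc_esqzw30` is the one attached to this comment (an inference from the matching values, not stated in the page):

```python
import json

# The batched model response, verbatim from above.
raw = (
    '[{"id":"rdc_fvx67z5","responsibility":"company","reasoning":"deontological",'
    '"policy":"ban","emotion":"outrage"},'
    '{"id":"rdc_fvx40us","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_esqiowe","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"rdc_esqzw30","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"mixed"},'
    '{"id":"rdc_estsssr","responsibility":"developer","reasoning":"deontological",'
    '"policy":"liability","emotion":"fear"}]'
)

# Index the batch by comment id for lookup.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw)}

# "rdc_esqzw30" is an assumption: it is the only record whose values match
# the coding table above.
coded = codes_by_id["rdc_esqzw30"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer consequentialist regulate mixed
```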