Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a misleading article. I am a police officer and I use facial recognition software. You take a picture of the intended subject and the software shows you suspects it believes are possible matches and ranks them by percentage for how close of a match they are. So for one suspect you will get, say, 30 possible matches, ranging from 30-90%+ match. These are listed as suggestions ranked by likelihood of a match but it is up to the user to manually look through the pictures and determine which, if any, are a match. Calling the 29 other suggestions mismatches is misleading because they were offered in addition to the correct match. In my experience, it will either accurately locate the person if they have a criminal record and only fail to if the person isn't in the database at all. For people complaining about an Orwellian State, in the absence of facial recognition and adequate identification we would have to bring someone to the station for photos and fingerprinting which was a lot more time consuming for the suspect.
Source: reddit · AI Harm Incident · timestamp 1530803462.0 · score 8
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_e1u2s2w","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_e1u1ro5","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_e1ty22v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_e1ug5vo","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"rdc_e1txamo","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
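The raw response is a JSON array of per-comment codes keyed by an `id`. A minimal sketch of how such a response could be parsed and a single comment's codes looked up, using only Python's standard `json` module (the id and field names come from the response above; the closing bracket is restored, since the captured output ended with `)` instead of `]`):

```python
import json

# A fragment of the raw LLM response shown above, with valid JSON delimiters.
raw = (
    '[{"id":"rdc_e1u2s2w","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"}]'
)

codes = json.loads(raw)                 # raises json.JSONDecodeError on malformed output
by_id = {c["id"]: c for c in codes}     # index the array by comment id
entry = by_id["rdc_e1u2s2w"]

print(entry["reasoning"], entry["emotion"])  # unclear indifference
```

A `json.JSONDecodeError` here is exactly the failure mode the captured response exhibits (a stray `)` in place of `]`), which is why validating the model output before coding is worthwhile.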