Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> It's time to admit that either the technology doesn't work or that it's too fundamentally flawed to use on non-white people

Neither of them are true. Recognizing faces is a good application of machine learning models *but* training sets are very often a lot worse at non-white people in the US. What they don't and probably never will produce is 100% certainty of a match and they don't need to either. They are just tools and should only offer suggestions which is what they do. Humans still have to make the decision to act on it and can easily check if they agree with the model in the case of facial recognition.

> It's just more junk science like lie-detectors.

Those are literally 100% junk so no, not true.
reddit AI Harm Incident 1691424292.0
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jv68y4n", "responsibility": "user",      "reasoning": "deontological",    "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jv6d7de", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jv5p3yb", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jv6b6es", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jv5ychu", "responsibility": "user",      "reasoning": "deontological",    "policy": "none", "emotion": "outrage"}
]
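The raw response is a JSON array of per-comment coding records; the table above is simply the record whose `id` matches this comment. A minimal sketch of that lookup (the field names come from the response above, but the `coding_for` helper and the two sample records are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Two records copied from the response above, for illustration.
raw = '''[
  {"id": "rdc_jv6b6es", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jv5ychu", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse the raw response and return the record for one comment id."""
    records = json.loads(raw_response)
    return next(r for r in records if r["id"] == comment_id)

record = coding_for(raw, "rdc_jv6b6es")
print(record["responsibility"], record["emotion"])  # developer approval
```

Using `next` on a generator raises `StopIteration` if the model omitted the comment's id, which makes a missing coding fail loudly rather than silently.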