Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wish there was a bit more detail. Due to the sample size these things are being developed with, they're kind of notoriously bad at telling people of color apart. Basically, the software is developed by people in fields that, for a variety of reasons too complicated to go into here, are majority white, majority male. The people doing the developing largely are using their own pictures to train the AI, or bringing in photos of relatives and the like. End result is the AI might have a sample size that's 70-90% white. When correctly identifying the black people in the sample basically amounts to 'Telling which of the 5 black people in the sample this is' it's really easy for it to LOOK like it's working correctly. Until that software is just attached to a new database. I'd like to see this particular failed software pointed at different pictures of the same convicts, see how many false IDs there are.
reddit · AI Harm Incident · 1565735945.0 · ♥ 9
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ewuwql7", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_ewsv6x2", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_ewsmh6i", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_ewth5hs", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",    "emotion": "outrage"},
  {"id": "rdc_ewu0s5q", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear", "emotion": "resignation"}
]
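To inspect the exact model output for a single coded comment, the batch response can be parsed and filtered by `id`. A minimal sketch, assuming the JSON array shape shown above; the `lookup` helper name is illustrative, not part of the pipeline:

```python
import json

# Raw LLM response, copied verbatim from the record above.
raw_response = '''[
 {"id":"rdc_ewuwql7","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_ewsv6x2","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_ewsmh6i","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_ewth5hs","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"rdc_ewu0s5q","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]'''

def lookup(raw: str, comment_id: str):
    """Return the coded record for one comment id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

# The record for the comment shown above matches the Coding Result table.
coded = lookup(raw_response, "rdc_ewsv6x2")
print(coded)
```

Here `rdc_ewsv6x2` resolves to responsibility=developer, reasoning=deontological, policy=unclear, emotion=mixed, i.e. the values in the Coding Result table above.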