Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Actually, algorithmic bias is undesirable but expected, because the training data set is itself biased. We actually have to counter-bias the algorithms on purpose, which you could itself consider a purposeful bias. For instance, face recognition algorithms will detect monkey faces as humans much more often than they should, because it's less of a PR nightmare to identify some monkeys as human than to identify some humans as monkeys. Another classic one is "slur detection", which was flagging sentences containing "jews" as likely offensive because the word comes up in a tremendous number of racist sentences and rarely in more neutral ones. How do we fix this? We add a hard-coded list of safe words. It's pervasive in the domain. Do not trust algorithms to be fair. I'm okay with that article, because even though it paints the issue with maleficent intent, at least it pushes people to doubt/criticize the fairness of such algorithms.
Source: reddit · AI Harm Incident · Unix timestamp 1625871957.0 · ♥ 13
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
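In code, a coded record like this one could be held in a small structure with one field per dimension. A minimal sketch in Python; the CodedComment dataclass and its field comments are illustrative assumptions, not the pipeline's actual types:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """Coding dimensions for a single comment, mirroring the table above.

    Hypothetical representation; the pipeline's real storage format is
    not shown on this page.
    """
    responsibility: str  # who is held responsible, e.g. "developer"
    reasoning: str       # style of moral reasoning, e.g. "consequentialist"
    policy: str          # policy remedy invoked, e.g. "liability"
    emotion: str         # dominant emotion, e.g. "resignation"
    coded_at: str        # ISO 8601 timestamp of the coding run

example = CodedComment(
    responsibility="developer",
    reasoning="consequentialist",
    policy="liability",
    emotion="resignation",
    coded_at="2026-04-25T08:33:43.502452",
)
```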
Raw LLM Response
[ {"id":"rdc_h4no8ba","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_h4n89ab","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"rdc_h4o5p8d","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_h4nff7q","responsibility":"author","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"rdc_h4oq2na","responsibility":"author","reasoning":"virtue","policy":"unclear","emotion":"outrage"} ]