Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That’s a good point. My intent is to say that this is a human problem, not an AI problem. I’m seeing lots of “well maybe the AI is just detecting things we intentionally ignore” in this thread, and that’s not even close to the problem. Let me be clear, the problem is that whoever trained this AI used a poor dataset for the real world, and that’s why it can’t recognize people with darker skin. Whether the decision to train the AI on this particular dataset was made out of malice or ignorance I can’t say, but I think we can agree that choosing a dataset of only non-bearded white men and thinking that set would provide any kind of consistency in the real world is an example of systemic racism, as well as just plain stupidity.
reddit · AI Harm Incident · 1576186868.0 · ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_falcq5l", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_famcwsw", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_falkb8s", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_falmk21", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_fal20y5", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
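A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this log, not from an official codebook, and the `validate` helper is an assumption, not part of the coding pipeline.

```python
import json

# Allowed labels per dimension, inferred from this log (assumption,
# not an official codebook).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "resignation"},
}

def validate(records):
    """Return the ids of records whose labels all fall in the allowed sets."""
    valid = []
    for rec in records:
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec["id"])
    return valid

# One record from the raw response above, as the model emitted it.
raw = ('[{"id":"rdc_falcq5l","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(validate(json.loads(raw)))  # -> ['rdc_falcq5l']
```

Records with out-of-vocabulary labels (e.g. a misspelled dimension value) would simply be excluded, which makes malformed model output easy to spot before coding results are tabulated.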