Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A system with a racial bias at its core breeds a racist system. No one has to intend to make some sort of klansman AI, but creating something that will likely be used on ALL people yet only works for a specific group is not only exploitable but has the exploits cooked right in. This is a prime example of systemic racism. Like the article explains, the real results of the system means that since it can’t tell black and brown people apart, civilians are targeted as al qaeda members or minorities go to jail/lose welfare/denied jobs because “infallible” computers misidentified them as violent felon. Shrugging it off as a mere bias because the development team simply forgot black people existed is highly irresponsible.
reddit AI Harm Incident 1576185791.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_falcq5l", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_famcwsw", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_falkb8s", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_falmk21", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_fal20y5", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"}
]
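To trace a coding result back to the raw model output, the batched JSON response can be parsed and the entry for a given comment id pulled out. A minimal sketch, assuming the raw response is valid JSON and that the `extract_coding` helper and variable names are hypothetical (not part of the actual pipeline):

```python
import json

# Raw LLM response exactly as captured in the log above.
raw = """
[
  {"id": "rdc_falcq5l", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_famcwsw", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_falkb8s", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_falmk21", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_fal20y5", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a batched response."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(f"no coding found for {comment_id}")

# The entry matching the table above (developer / deontological / regulate / outrage).
print(extract_coding(raw, "rdc_falkb8s"))
```

This reproduces the table values from the third entry of the batch; the other ids in the response belong to other comments coded in the same LLM call.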