Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Any society has biases and cultural norms that make it harder for some groups to thrive, and that will inevitably push them into a cycle of crime. These things aren't usually represented in the crime statistics that we feed these algorithms. So it's possible that these algorithms can end up automating systemic oppression against marginalized groups that are driven to crime, when more humane interventions like welfare or job training could've prevented the issue. These are things that need to be taken into account, especially since these algorithms will inevitably be used in a Minority Report-type scenario. That's the only reason to predict an individual's likelihood of committing a crime in the future.
reddit · AI Harm Incident · 1555778100.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_elcx159", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_ectxdpx", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_ohwe0mu", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_gg0hjf6", "responsibility": "company", "reasoning": "mixed", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_ohscfs4", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
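The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a response could be validated and loaded, assuming the dimension labels visible in the response above make up the full code books (the names `SCHEMA` and `parse_coding_response` are hypothetical, not part of the tool):

```python
import json

# Allowed values per coding dimension. These sets are an assumption,
# reconstructed only from the labels that appear in the raw response.
SCHEMA = {
    "responsibility": {"distributed", "none", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "ban", "none"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    silently dropping any record that fails schema validation."""
    results = {}
    for record in json.loads(raw):
        codes = {dim: record.get(dim) for dim in SCHEMA}
        if all(codes[dim] in SCHEMA[dim] for dim in SCHEMA):
            results[record["id"]] = codes
    return results

raw = ('[{"id":"rdc_elcx159","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
coded = parse_coding_response(raw)
print(coded["rdc_elcx159"]["policy"])  # regulate
```

Validating against a fixed code book catches a common failure mode of LLM coders: invented labels outside the schema are rejected rather than stored alongside valid codes.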