Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzlQdlbd…`: "Ok, here is my PRO-AI comment: not every drawing is The Art, not every art is a …"
- `ytc_UgyHm9HfS…`: "AI should be way more restricted, and we have to be very careful bc one day the …"
- `ytc_UgyBFRcto…`: "On the part of Can AI Replicate Human Uniqueness, this is actually what is alrea…"
- `ytc_UgzKqVpO3…`: "Well as a programmer I am not a fan of AI in general its honestly a big cop out …"
- `ytc_UgyCrbs1v…`: "I want to say I’m not an Elon Hater but knowledge should never be suppressed…. A…"
- `ytr_UgwjmdBOE…`: "Well, @WilliamThompson-hv1fvname, this robot is programmed with a secret move ca…"
- `ytc_Ugw2WL7zV…`: "Ai slop is the best petition to make an ai slopper voice Diddy and Epstein…"
- `ytc_UgxD-Qw5N…`: "Ppl underestimate how insane a robot will be with basic hand to hand combat; it …"
Comment
> So black people didn't reoffend at a higher rate, yet the AI still developed a bias? Am I reading you right?
No, I don't think that's the right reading. The problem wasn't about differences in reoffense rates, it was about differences in the algorithm's error rates. For example, the AI wrongly predicted that black people would reoffend way more often than it wrongly predicted that white people would reoffend, even after controlling for other relevant data like history of criminal activity and history of criminal recidivism. The AI was also almost twice as likely to wrongly guess that white people would *not* reoffend as to wrongly guess that black people would not reoffend.
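The distinction the comment draws is between base rates (who actually reoffends) and error rates conditioned on group (who the model gets wrong, and in which direction). A minimal sketch of that comparison, using made-up confusion-matrix counts rather than ProPublica's actual data:

```python
# Sketch of the error-rate comparison described above.
# Counts are illustrative placeholders, NOT ProPublica's numbers.

def error_rates(tp, fp, tn, fn):
    """False positive rate = FP / (FP + TN): non-reoffenders wrongly flagged.
    False negative rate = FN / (FN + TP): reoffenders wrongly cleared."""
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return fpr, fnr

# Hypothetical per-group confusion-matrix counts.
group_a = error_rates(tp=500, fp=450, tn=550, fn=195)
group_b = error_rates(tp=300, fp=220, tn=730, fn=277)

print(f"group A: FPR={group_a[0]:.2f}, FNR={group_a[1]:.2f}")
print(f"group B: FPR={group_b[0]:.2f}, FNR={group_b[1]:.2f}")
# Two groups can have similar overall accuracy while the
# direction of the errors differs sharply between them.
```

This is the shape of the fairness dispute in the linked sources: Northpointe argued for calibration-style fairness criteria, while ProPublica's analysis focused on the asymmetry in false positive and false negative rates.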
Here are all the sources, if you're interested.
[The original ProPublica article (May 2016).](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)
[The explanation and justification of their calculations (May 2016).](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm)
[A Github repo containing all their data and calculations (May 2016).](https://github.com/propublica/compas-analysis)
[Northpointe's response, arguing that their algorithm is actually fair (July 2016).](https://www.documentcloud.org/documents/2998391-ProPublica-Commentary-Final-070616.html)
[ProPublica's nontechnical response to Northpointe's response (July 2016).](https://www.propublica.org/article/propublica-responds-to-companys-critique-of-machine-bias-story)
[ProPublica's technical response to Northpointe's response (July 2016).](https://www.propublica.org/article/technical-response-to-northpointe)
[A Federal Probation Journal article arguing against Propublica's results (September 2016).](http://www.uscourts.gov/federal-probation-journal/2016/09/false-positives-false-negatives-and-false-analyses-rejoinder)
[ProPublica's annotations to that paper, arguing their case (September 2016).](https://www.documentcloud.org/documents/3248777-Lowenk
reddit · Cross-Cultural · 1539187271.0 · ♥ 145
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_e7jkpus","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"rdc_e7j1brn","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"rdc_e7ipl28","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"rdc_e7ipybi","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"rdc_e7j1qhk","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}
]
```
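A batch response like the one above has to be parsed and checked before its dimensions are written back to a coding table. A minimal sketch, assuming only the field names visible in the response itself (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the validation helper is hypothetical, not part of the tool:

```python
import json

# A raw LLM batch response, truncated to two of the records shown above.
raw = '''[
 {"id":"rdc_e7jkpus","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
 {"id":"rdc_e7j1brn","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# Every record must carry these keys; "id" links back to the source comment.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw batch response, rejecting records with missing dimensions."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(codings["rdc_e7j1brn"]["policy"])  # ban
```

Keying the result by comment ID makes the "Look up by comment ID" style of inspection a plain dictionary access, and a malformed record fails loudly instead of silently producing an `unclear` row.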