Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of people commenting on this don't quite seem to understand his point. For those of you saying "of course it predicts bad because it only has two classifiers" - he was proving that the algorithms that the courts are using are in comparison to the results that a two classifier algorithm would give you. Which is NOT good and should NOT be deciding whether or not someone should be put in jail. For those of you saying "the algorithm is proving that blacks commit more crimes" this is also simply not the case. He is saying that the algorithm predicted that more blacks would reoffend when they actually didn't. And that more whites would not reoffend when they actually did. It has nothing to do with who is committing more crimes. It has something to do with how the algorithm is classifying REAL life people and making decisions about people's lives and race ends up playing a huge roll in this. This research could say a lot about our criminal justice system, but it should really worry you that these algorithms are being deployed without actually KNOWING why it is getting the results it is getting. The courts that are using this probably had no idea that this was actually happening, because a human mindset is " well if a computer believes it will happen then I agree with the computer." The research done here should open up the community to challenge these AI's and their abilities. I do believe that technology is powerful and can solve many many things than us as humans cannot. But i do believe that if these algorithms are being deployed in the real world, than they need to have concrete evidence of their capabilities and should have proof that they are helping us and not ultimately hurting us. Awesome talk.
youtube AI Harm Incident 2019-04-08T19:3… ♥ 2
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzvhwFvB7oi-_lJH7p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxRsW6_dK9mZHrDGgF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy2g28mjkxLDtGgBq54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyWo6ork0Nq7swGDrB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzqYe3stj_MCdc8Ont4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwcuweh4_Ouew_cU3t4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"approval"},
  {"id":"ytc_UgxplwNJPzzXQK_DpcJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxVnjAzQGbX110Qe0F4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9kDZTVOXQyiyQA_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyocqvY_rib37g6ROd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
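The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of extracting a single comment's codes from such a response (the helper name `code_for` and the truncated example array are illustrative; in practice the full raw response string would be parsed):

```python
import json

# An abbreviated raw batch response in the same shape as the array above.
raw = '''[
  {"id": "ytc_Ugw9kDZTVOXQyiyQA_l4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "approval"}
]'''

def code_for(comment_id, raw_json):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw_json)
    return next((rec for rec in records if rec["id"] == comment_id), None)

rec = code_for("ytc_Ugw9kDZTVOXQyiyQA_l4AaABAg", raw)
print(rec["policy"])  # → regulate
```

Looking records up by `id` rather than by array position keeps the extraction robust if the model returns the batch in a different order than it was sent.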