Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Every instance of a false arrest related to facial recognition software that I've heard of has been a Black person. This is at least the 2nd Black person falsely implicated by this technology and subsequently arrested in Detroit since 2020 when Robert Williams was erroneously thrown in jail. Mr. Williams has also filed suit, btw. And James Craig, Detroit Chief of Police at the time Mr. Williams was falsely arrested, admitted that the technology doesn't work. “If we would use the software only [to identify subjects], we would not solve the case 95-97 percent of the time,” Craig said. “That’s if we relied totally on the software, which would be against our current policy … If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify. Wtf, Detroit PD? Why are you still relying on this shyt that you know doesn't work? There are even federal studies that have shown that facial recognition systems misidentify Asian and Black people up to 100 times more often than white people. Why is, DPD, or any so-called law enforcement agency even allowed to use such technology when it obviously doesn’t work? Undoubtedly, if facial recognition had such a high error rate with white faces, then there would be far more than just a handful of municipalities that have banned it's use. Hell, there would probably be federal legislation to establish strict guidelines by now.
youtube AI Harm Incident 2023-08-08T18:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyZP9a94Ad0wCjUJB94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxWR8oQC80pLV-Qh3l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwzPNJTd_wN5s1rrdl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxPxYlJeicDnQ8_vtB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxtNWW_GZxiM9Bqq-l4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxEaIDoNIMcd2f1rzF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwYa1-HAHTLpdMk4U54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzULOv3rWOdF20TfzZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzMGRuFdi8QGkRDJSp4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugw1BLcit247NP8ta994AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
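The raw response is a JSON array with one coding object per comment, keyed by a `ytc_…` comment id. A minimal sketch of how such a response might be parsed back into per-comment codes (the id and field values below are copied from the response above; the variable names are illustrative):

```python
import json

# A one-element excerpt of the raw LLM response shown above.
raw_response = '''
[
  {"id": "ytc_UgxWR8oQC80pLV-Qh3l4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

# Index the coding objects by comment id for quick lookup.
records = {record["id"]: record for record in json.loads(raw_response)}

# Retrieve the coding for the comment displayed on this page.
coded = records["ytc_UgxWR8oQC80pLV-Qh3l4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # company outrage
```

Indexing by id makes it straightforward to join the LLM's codes back onto the original comment table, since each object carries the same id as the source comment.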