Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It sounds like, for the sickness one at least, the AI wasn't looking at who was more sick; it was looking at who LOOKED more sick, and darker skin does not help it recognize visual symptoms, so the machine needs to measure sickness in less visual ways in order to remove this bias. As for the police one, the data sets are not biased; black people do in fact make up a larger portion of shootings in urban areas. However, using an AI to predict who is at risk to perform a crime is just stupid, because the appearance of the criminal has little to no relation to their performing the crime; it has more to do with the person themselves, who they spend time around, what area they come from, and their willingness to do "dishonorable" or "dishonest" things. Someone in a gang is 100% more likely to end up in gang violence, i.e. what cops did before in determining who they should watch is fine. Basically, what information you feed an AI in such applications is very important, and you need to be very careful. I think AI would work wonderfully as a search engine to find the location, or likely location, of suspects that people have identified, or to identify patterns in areas where certain crimes are committed, figure out why they would be committed there, and fix the problem, or have an officer patrol that area more often. Most of those applications are already done by the police; the only difference is that the AI would do it quicker. No police AI should be connected to the internet, though; it should only use police databases, as those can't be corrupted with biased data as easily as the internet.
youtube AI Bias 2023-11-02T00:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzV9n3NUpf6jhoGY-J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwxHzeYa3QaVYAszIF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugw_6dFPZaGHujKebTt4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxyS4atjxksa1jZS4x4AaABAg","responsibility":"society","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzPzz5anUuUq_JEsQl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxDJmlX5DA2xe5p3Ph4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxGG35u1I4yykFKNzl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwHSJZ5PRMWTjsqbpd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxJSZt2iuql78wnlD54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyGQp6L9azYriR3hbV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"resignation"}
]
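One pitfall worth inspecting for in the raw output: the record for `ytc_UgxyS4atjxksa1jZS4x4AaABAg` repeats the `responsibility` key ("society", then "government"). A plain `json.loads` silently keeps only the last value, so the first coding would vanish unnoticed. Below is a minimal sketch, assuming only the Python standard library, of parsing a response while flagging duplicate keys via `object_pairs_hook`; the two-record excerpt and ids `ytc_A`/`ytc_B` are hypothetical stand-ins for the real response.

```python
import json

def parse_with_dupe_check(raw: str):
    """Parse a JSON array of coded records, collecting duplicated keys.

    Returns (records, dupes) where dupes is a list of (record id, key)
    for every key that appeared more than once within a single object.
    """
    dupes = []

    def hook(pairs):
        keys = [k for k, _ in pairs]
        obj = dict(pairs)  # dict() keeps the LAST value for a repeated key
        for k in set(keys):
            if keys.count(k) > 1:
                dupes.append((obj.get("id"), k))
        return obj

    records = json.loads(raw, object_pairs_hook=hook)
    return records, dupes

# Hypothetical two-record excerpt mirroring the duplicate-key pattern above:
raw = ('[{"id":"ytc_A","responsibility":"none","emotion":"indifference"},'
       ' {"id":"ytc_B","responsibility":"society","responsibility":"government"}]')
records, dupes = parse_with_dupe_check(raw)
# records[1]["responsibility"] == "government"  (last value won)
# dupes == [("ytc_B", "responsibility")]
```

A hook like this lets the pipeline reject or re-prompt on malformed codings instead of storing whichever value happened to come last.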