Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This kind of armchair computer science really irks me.

- First example: they magically "programmed a robot" (how?) that "preferred" (how?) men over women, whites over blacks, "and so forth" (how much?). The closest real studies I know in this context test language models to see which jobs they correlate with which identity trait. The model correctly guesses that firefighters, policemen, engineers, CEOs, surgeons etc. are more likely to be men, and teachers, nurses, students etc. are more likely female occupations. That's not sexist, that's literally what the statistics are. It makes no sense to train a model and then wish it doesn't reflect reality.

- Second example: it's not the fault of the model that most Chicago crime is committed by blacks. It's not the model's fault either that the Chicago PD feeds the person's race and/or picture into the machine for prediction, and it's certainly not the model's fault that the PD puts people on high surveillance based on just that one result. If you don't want the machine to do profiling, then don't give it the data it needs to do so.

Also: the victim has been shot twice by whom? The video doesn't say. In Chicago, the likelihood of it being a cop is lower than a gang member. If it was a cop: what were the circumstances? Did he threaten police with a gun? He has no criminal record, but every criminal has to start somewhere, so was he committing a crime?

Stop slandering "AI". You don't know what you're talking about.
youtube AI Bias 2022-12-19T09:4…
Coding Result
Dimension | Value
Responsibility | unclear
Reasoning | unclear
Policy | unclear
Emotion | unclear
Coded at | 2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgyW5oYgaVl2e9eScrh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyOlLsQtWifo8qBeQx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwEbeTLua_--O7oGCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyfQC1znGqPDUmf6Jx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyJTtik7PBvLyP0K0h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgymNjau3ZOnG7o0nVt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgzgO4s1K_621UNd4cF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyzFUnNA61qn1OQpdZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx47dwVyaMkFNvjDAN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgxsRaTcho6OC6myK0l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"})
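Note that the raw response above opens with `[` but closes with `)`, so it is not valid JSON; that would plausibly explain why every dimension in the coding result fell back to "unclear". A minimal Python sketch of a defensive parser for such responses (the function and constant names are hypothetical; the allowed value sets are just those observed in this output, and the real codebook may permit more):

```python
import json

# Values observed in the raw response above; the actual codebook may allow more.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "liability", "ban"},
    "emotion": {"indifference", "approval", "mixed", "outrage",
                "resignation", "fear", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse an LLM coding response, coercing unknown values to 'unclear'."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output, e.g. a stray ')' where ']' was expected.
        return []
    cleaned = []
    for rec in records:
        row = {"id": rec.get("id", "")}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            row[dim] = value if value in allowed else "unclear"
        cleaned.append(row)
    return cleaned
```

With a malformed response the parser returns an empty list, which a caller could map to an all-"unclear" coding row like the one shown in the result table.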