Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
“criticism over AI is racist” is a wild take for many reasons, but I would make the sense that AI… is kind of racist (not really but it perpetuates a lot of systematic racism). The computer scientist Joy Buolamwini has done a lot of research into this (please read her book Unmasking AI), but the basic problem is that AI is trained on what she describes as “pale male” data- ie, data is trained on the data of white men (and in general, the data of non-oppressed groups). In some data sets that have been researched, the demographics of the data are around 70% men and 80% light skinned. We choose biased data to feed AI, so the AI repeats biased responses. As of November 2023 (so this could be better now), it was found that AI was more likely to generate lighter skinned people when asked to generate people with high paying jobs and darker skin when asked to generate people who were criminals. They’re also significantly worse at identifying non-white and non-male faces. AI cars are also more likely to hit people with darker skin than lighter skin, because they’re trained on and tested on a majority white data set. again, most studies you find on this are from 2023, so some of this might have changed… except that with how things are now, errors happen first and changes are made later. What about the smaller biases AI have that people won’t notice for years? Can we really great good diverse writing when we’re not training data sets on diverse material? When the material being generate is less objectively analyzable than “does this car hit x group of people more or less or the same”, will we even notice the biases AI is perpetuating until much later? TLDR; Robots are racist we should cancel them
Source: youtube · posted 2024-09-04T23:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxC4H6S-NtWeXtYfwR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_Ugyr6zqy30bOnKpzpEF4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_UgwU_XqLU9otLnDSKSt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz8G9ZOfBC7l24qkP94AaABAg", "responsibility": "user",      "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyYGSzQl15DWBj6tqV4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_Ugy-WxcpJ4RAGxLKoiV4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_UgwoUFSv01s7n4r4VZt4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_UgxmeIZV8RjjNEXwMRZ4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_Ugxa87FDx0tQRW7QoU14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_Ugwye4Mdx-6ImlgC9j14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"}
]
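A minimal sketch of how a pipeline like this one might turn the raw LLM response above into per-comment coding results: parse the JSON array, index the records by comment id, and look up the record for the comment shown here (id ytc_UgwU_XqLU9otLnDSKSt4AaABAg, whose fields match the Coding Result table). The field names come from the JSON above; the lookup code itself is an illustrative assumption, not this project's actual implementation.

```python
import json

# Excerpt of the raw LLM response shown above (one record, verbatim fields).
raw = '''[
  {"id": "ytc_UgwU_XqLU9otLnDSKSt4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "mixed"}
]'''

# Index records by comment id so each coded comment can be looked up directly.
records = {r["id"]: r for r in json.loads(raw)}

# Fetch the coding for the comment displayed on this page.
coding = records["ytc_UgwU_XqLU9otLnDSKSt4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself mixed
```

In practice a batch response would be validated first (e.g. checking that every id in the batch appears exactly once and that each dimension takes one of the expected values) before the results are written back to the coding table.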