Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some criticism for you. In your first example, you ask what people can do to improve themselves and you get innocuous results. YOU are the one who introduced race when you said "white" people, your implication, that ChatGPT responding differently to this implies bias is silly. When you qualify your statement with some attribute the question becomes specifically about what you added.

If i were to say "people need to work on build healthy relationships", that's a general statement that has no bias. If i were to say "black people need to work more on building healthy relationships" that is definitely racist, as it implies i think this advice is specific to black people.

Your point is somewhat valid around why chatGPT doesn't refuse to answer the question when you use "white" people can improve, but you must take into consideration which groups have historically been the oppressed. White people have historically been responsible for racial prejudice that has been harmful, and yes they can still improve in this regard. They also have not historically suffered because of these sweeping generalizations. Stereotypes about black people, women, among other oppressed groups have been harmful and have contributed to discrimination.

At 6:15 this a reflection of reality for generations in this country, not a bias. White people have done FAR worse than burn a black person's house down for having the audacity to even sit at the table next to them at a restaurant (I know this is an extreme example, but extreme examples make for the most compelling stories that people want to read). Of course this has changed in the last 50 years, but if you're saying that ChatGPT is biased because the stories about black and white people's experiences don't follow the same trajectory, you are completely ignorant of the reality of the difference in how the world treats these two groups
youtube AI Bias 2023-10-16T02:0… ♥ 5
Coding Result
Responsibility: none
Reasoning: mixed
Policy: none
Emotion: indifference
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxV2nv2-DWQErUPRlZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwvU4-OAzgeHR5xvNx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz5ilQaFjat2eOlU594AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwcMf_YJmxEk9bf9EN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTc8OS9NI2O8bzrRl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyO-aMZoGG5Flo2xp14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwsXX2uRCoM6-oPknl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugx_vLd4r-2lxdGMEsd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx0SAtWuWzgIrkArmt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxGNOYYSz7QqdDLkj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
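A raw response like the one above can be checked programmatically before its codes are trusted. The sketch below (a minimal illustration, not the tool's actual pipeline; the `parse_codings` helper name and the truncated two-record sample are hypothetical) parses the model's JSON array and verifies that every record carries all four coding dimensions plus an `id`:

```python
import json

# Two records copied from the raw response above, truncated for illustration.
raw = (
    '[ {"id":"ytc_UgxV2nv2-DWQErUPRlZ4AaABAg","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"indifference"},'
    ' {"id":"ytc_Ugx_vLd4r-2lxdGMEsd4AaABAg","responsibility":"government",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"} ]'
)

# Every coding record must supply these keys.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model's JSON array and index the records by comment id,
    raising if any record is missing a coding dimension."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(codings["ytc_UgxV2nv2-DWQErUPRlZ4AaABAg"]["emotion"])  # indifference
```

Indexing by `id` lets the displayed "Coding Result" for a given comment be looked up directly from the raw response, which is how a mismatch between the table and the model output would surface.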