Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In today's political climate, if you look for something that is offensive, you will find it, because everything in a certain light can be offensive. The issue is that videos like this, which make it seem like the programmers are racist or whatever, are making things worse. A popular AI called ChatGPT REFUSES to give you any data on race. For example, if you ask it what the most common race for a criminal is (African Americans, I think), it won't give you an answer; it'll just say something like it's not appropriate to generalize races. This is not racist, it's a piece of data. I'm sure there is more to this story that this guy is not saying, either because it is a short or because he wants to make every AI seem racist. The issue is that now, AI like ChatGPT is withholding information because it doesn't seem to be "politically correct."

As for the Johns Hopkins study, there has got to be more information about it that is not talked about in this video, because AIs are literally incapable of having opinions. As for the guy who was marked as more likely to commit a crime, maybe he was. Just because he's black does not mean that the AI is racist; maybe he actually was more likely to commit a crime. And if he was shot twice after that, so what? That just proves that the AI made the right choice, or maybe he was shot by a completely racist cop, but YOU DON'T TELL US. You are only telling us the things that make the programmers, the AI, and the data set look bad. It's called selective data. As for the thing with the hospital, maybe that's just bad programming, or maybe black people just have slightly stronger immune systems. I don't know exactly how much sicker the AI said they had to be, because again, you don't tell us.

Moral of the story? There's usually more to the story; don't believe everything you see on the internet. Or maybe I'm wrong and the data sets of these three AIs actually are biased; I haven't done any research on it.
But this seems highly suspicious, as two government-used programs wouldn't have biased data sets; that's the last thing the government wants.
youtube AI Bias 2022-12-23T01:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgztArevrT3D0UTEuCZ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugzp_mEaamuy2MeYiot4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxH2HnU9Q-okGrbMFZ4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyDObWHwgVr9BofvwF4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwj0I3QGl84UKdx9bF4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzXCTznBULhKIWDx714AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzTevvZUa7WDqE7D814AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugz0JG4CqcuCZsG1GcV4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_Ugx4diO-6YXazGQmz9J4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwhpYfysFZLTOgNdnN4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"}
]
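A raw response like the one above has to be parsed and validated before the per-comment codes can be shown in the table. The sketch below is a minimal, hypothetical example of that step: `parse_codes` and the `ALLOWED` value sets are assumptions inferred from the codes visible in this export, not the tool's actual codebook or implementation.

```python
import json

# Two entries from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id": "ytc_UgztArevrT3D0UTEuCZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzXCTznBULhKIWDx714AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

# Allowed values per dimension, inferred from the codes observed in this
# export; the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"resignation", "indifference", "outrage", "fear", "unclear"},
}

def parse_codes(payload: str) -> dict:
    """Parse the model output into {comment_id: codes}, dropping any row
    whose values fall outside the allowed sets."""
    coded = {}
    for row in json.loads(payload):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

codes = parse_codes(raw)
print(codes["ytc_UgztArevrT3D0UTEuCZ4AaABAg"]["emotion"])  # resignation
```

Keying the result by comment `id` lets the viewer look up the coding for the single comment displayed above without rescanning the whole batch response.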