Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wait, we are already using AI for these purposes? I thought we already knew that AI is usually bad for important cases like these. If you want an unbiased system that works correctly in nearly 100% of cases, without bias or mistakes, I thought you would use a human-made system rather than machine learning, because you can't prove that the AI is always doing the right thing for the right reasons in all possible cases, whereas for most human-made systems you can theoretically, mathematically prove them to be right in all cases. We can't know why an AI is doing something; we can only say what it did in a specific situation.

I would suspect every dataset you can train an AI on to include bias. For instance, here are a couple of correlations an AI might see in data that lead to a conclusion we don't want it to make. In the case of arrests, Google's immediate result states "Black americans are incarcerated in state prisons across the country at nearly five times the rate of whites," so from that statement an easy correlation it could note is that Black people are likely to be incarcerated, and therefore they should be put on a watch list. Similarly, in healthcare, African Americans typically have diets high in sodium/fat, as seen in soul food, which is one of the reasons they typically have a higher incidence of hypertension/diabetes and other comorbidities. Going by this, fewer Black people should receive transplants, and more should be flagged as too high risk for surgery.

The reason stereotypes exist in the first place is that they are more often true than false. Those are true correlations that exist, but they are correlations we want the AI to ignore, because, like ourselves, when it comes to important decisions we want it to rely on the specifics of the situation and not the general abstraction. Of the easiest/simplest methods the AI could use to categorize people, the most accurate is to resort to stereotype, and we can't possibly test that the AI isn't simply applying a stereotype to every possible group of people.

But if you were to continue with an AI approach to these problems regardless: to edit the data that causes stereotypes would be, by definition, to bias the data, and could produce worse results. So the only thing I can think of is to allow for more data, more training, and more testing, and not to release it until we are confident that it makes these choices better, and with less bias, than a human.
youtube · AI Bias · 2022-12-20T22:4… · ♥ 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
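
The four dimensions form a small categorical schema. Below is a minimal validation sketch in Python; the label sets are an assumption inferred only from the values visible on this page (the real codebook may define more labels), and validate_coding is a hypothetical helper, not part of any tool shown here.

# Label sets inferred from the values visible on this page; the actual
# codebook may define additional labels.
ALLOWED_LABELS = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning":      {"consequentialist", "unclear"},
    "policy":         {"liability", "ban", "regulate", "unclear"},
    "emotion":        {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the schema."""
    return [dim for dim, allowed in ALLOWED_LABELS.items()
            if record.get(dim) not in allowed]

# Example: a well-formed record validates to an empty list.
assert validate_coding({"responsibility": "developer", "reasoning": "consequentialist",
                        "policy": "liability", "emotion": "fear"}) == []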
Raw LLM Response
[ {"id":"ytc_UgxF74ucwreYD3aouaZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyuIC4MkBfs-3JAwed4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw3gr8vEGlqBVn5otF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwnLRu7mduPTp1Tr3t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxlepqNpGsjjJaEA_Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxBHqr0uXCQjYKsWtF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyHYnxCi5isyHcJ4wl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzAzS4J5AhXzXzQu5F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyPPhuKQbFdHv0wyJx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxpSCuZAoqLWQj6X-14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]