Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is based. Well, joking aside, bear in mind that an AI only outputs what it was trained to do. The problem most likely lies in what it was fed. Take the median income of Black people from a poor neighbourhood versus millionaires, compare their access to healthcare, and you can imagine that millionaires get better access to healthcare and would therefore use it more often. In comparison, the average for a Black person would be dragged down by the poorer people who access healthcare less, especially for 'less important' problems. So the model most likely sees that white people are admitted at an earlier onset than Black people and just repeats it. So the real question is: why would you ask the AI to account for skin colour as if it were an important parameter to consider? Or socioeconomic status, for that matter (though I would expect them to do the latter, because they want money).
Source: YouTube, "AI Bias", 2023-10-25T09:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw__5B11KgCK7pfGwh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyCtS-aiofFqM_nhAZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeSEhIg85cA7M7l2F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwoGKLQBx5DmhyCe_54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzu9K_SOLZWGkwyc6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLEfY5-DlUuKDuxgJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx7PatFcL1Gv7d6N-d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzQGBTzq1frzgzb_8h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzH444tL0BXHNC0TMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwuWd15bnxtex6l67F4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
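A raw batch response like the one above maps each comment id to the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before use, assuming the field names shown in the source; the helper name `parse_codings` and the key check are illustrative, not part of any pipeline described here:

```python
import json

# Two entries from the raw response above, shortened for the example.
raw = '''[
 {"id":"ytc_UgzQGBTzq1frzgzb_8h4AaABAg","responsibility":"developer",
  "reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgwuWd15bnxtex6l67F4AaABAg","responsibility":"distributed",
  "reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch-coding response and check each record has all dimensions."""
    records = json.loads(text)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
    # Index by comment id so one comment's coding can be looked up directly.
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(codings["ytc_UgzQGBTzq1frzgzb_8h4AaABAg"]["policy"])  # industry_self
```

Indexing by id makes it straightforward to line the raw LLM output up against the coding-result table for any single comment, as the view above does.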