Raw LLM Responses

Inspect the exact model output behind the coding for any comment.

Comment
Anyways, the second point you bring up, about Hospital Treatment, that's absolutely true, there are literally thousands of studies showing how Black Americans specifically, and to a much lesser extent most other non-white racial groups need to be in _significantly_ worse condition/state to recieve urgent care (skipping the ER line) or pain medication when coming in with things like broken bones, etc. So there is absolutely no doubt in my mind that the AI making decisions about care priorities is going to be influnced by the existing medical records and then continue to make choices that align with those pre-existing records. But yeah, that said, like i wrote in detail in my first comment, you deliberately leave out an _awful lot_ of cruicial, key information about the circumstances and what actually happened. From what i'm hearing, it sounds a hell of alot like the AI was spot-on, just not in the way that everyone expected, including you. Your _own_ bias makes you believe that if an AI flags a Black Man as "99.9% more likely to be involved in a shooting" that must mean the AI thinks HE is going to shoot somebody. Maybe what we need to think about are our own cognitive biases, not a predictive AI's, cause it seems like as far as it's prediction about Gun Violence, it was right. The poor man got shot twice. Maybe if the cops weren't so busy looking for signs he was going to shoot someone else while they were surveilling him, they could've worried about it equally from _both_ angles. To shoot, or to be shot.
youtube · AI Bias · 2022-12-23T03:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
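Each coding result is one record over four categorical dimensions plus a timestamp. As a minimal sketch, that record could be typed in Python as follows; the Literal value sets are inferred from the labels that appear on this page and may not exhaust the actual codebook:

from dataclasses import dataclass
from typing import Literal

# Label sets observed in the codings on this page; the underlying
# codebook may define values that happen not to appear here.
Responsibility = Literal["ai_itself", "developer", "company", "user", "distributed"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed"]
Policy = Literal["ban", "regulate", "liability", "industry_self", "unclear"]
Emotion = Literal["outrage", "fear", "approval", "indifference", "resignation", "mixed"]

@dataclass(frozen=True)
class CodingResult:
    comment_id: str        # e.g. "ytc_UgzUcODYWYp-tY58Io54AaABAg"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str          # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"

Typing the labels this way lets a checker such as mypy flag a misspelled label before it is stored.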
Raw LLM Response
[ {"id":"ytc_Ugy4KxxlERLxFGZMUWh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzUcODYWYp-tY58Io54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzFUTkKwQHGBKpm6sF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugx9DeRcqUWLuH0dYJR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzvbpkVuvhybFwv6W14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgztGp6E7FEMmR0KZIh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_Ugwl8tB54vBUAuDJu6x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzceN63p7Wjwiatmyl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzA57hfAzVHCz2j3RB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxqphEyrIMxHIdGY1V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"} ]