Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Robert situation wasn't because he was black, like it's made out to be in this video. It's because the algorithm determined he was more likely to shoot or get shot because he had friends who were involved in shootings. It was based on his social circle, not his race. And to the police departments credit, they brought a social worker with him to help him get a job and help improve his mental health so his score would be lowered. They weren't being racist nor were they trying to harm him. They were actively trying to help! Why did he get shot, then? Because all that his neighbours saw was police visiting him and people suddenly interviewing him. They didn't know what was going on and started suspecting that he was a snitch. So, after a while, some of those neighbours decided to deal with this "snitch". In the end, the police departments actions led to Robert getting shot. But it wasn't their doing, nor their intention! And it certainly wasn't because of racism! I haven't looked into the other examples brought up, so I can't say if they're correct or not. But even this one mistake was really bad. You should check your facts before you present them. If you didn't have enough time to check that last one, just leave it out.
Source: youtube · AI Bias · 2022-12-20T09:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
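
For reference, the coding result above could be modeled as a small record with four categorical dimensions plus a timestamp. This is a minimal sketch, assuming "unclear" is the fallback value when no valid code is available for a comment; the class and field names are illustrative, not the tool's actual API.

from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical data model mirroring the coding result table above.
# "unclear" is assumed to be the fallback when no valid code was parsed.
@dataclass
class CodingResult:
    responsibility: str = "unclear"  # e.g. "developer", "user", "ai_itself", "none"
    reasoning: str = "unclear"       # e.g. "consequentialist", "deontological"
    policy: str = "unclear"          # e.g. "regulate", "ban", "liability", "none"
    emotion: str = "unclear"         # e.g. "outrage", "indifference", "mixed"
    coded_at: datetime = field(default_factory=datetime.now)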
Raw LLM Response
[{"id":"ytc_Ugy91car2S65bz57pxh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzDbfQuRegCmPbEHX14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxoKF0HCvolXXZoHOp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugzkl069KQ1dDfqCjqV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzHtYqQEJMmloxZznJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzeOoYmqDvH6eedb-N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzHZ5vPybLv7e9i2DZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxP2fGyF2ybOny5Q1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwMUsue5nLSs793_y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwV_pwnJGWiWVoFUnN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"})