Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is they are attempting to make it "Unbiased" which is unfortunately impossible and not realistic. If you ask it *in English* (which is a key term here). To show you 100 pictures of doctors, the expected result as per a 2019 report would be 56 photos of white doctors, 17 photos of Asian doctors, 5 Hispanic and 5 black with the rest basically unknown. This unfortunately would be "racist" generation by today's upside down logic. There is more to the story, I believe an AI would be smart enough to understand, and obviously not have stupid human bias to show more white people than anything else, and i do believe some people at Google (when considering refused to show black when even prompted) had good intentions to fix this obvious fault. But also knowing google, oh there def was e a ton of incompetence. So "OH RACIST GOOGLE" is a half truth. Curiously it has no stats for Indian physicians which we all know are plentiful (okay Indian races are apparently considered "Asian" races? Because continent of Asia? Okay.... I mean "technically" I guess... they don't share much in common with one another so i wouldn't consider it the same race but whatever....)
youtube 2024-03-25T16:1… ♥ 1
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | resignation
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzF0lOVVic0nTsjrqB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyTOY2zGBER34OSCjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwi1JZX6OhH2cSBkRd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzw-7giaSWCdZYUeEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZEigjF46-1wcYeqV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy-ql3fW4uiY-W88Pd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwL1zDiQ4CS5cwfdzV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxwekT3JDYvHpBsl_54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzV54YoSwAAp44On8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2kcI_ac1BjlRYOP54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
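As an illustration of how a raw response like the one above could be checked, here is a minimal Python sketch that parses the JSON array, validates each record against the coding dimensions, and looks up the coding for one comment id. The `SCHEMA` value sets are assumptions inferred from the codes visible in this output, not a definitive list from the pipeline, and the raw string is abridged to two records.

```python
import json

# Abridged copy of the raw model output shown above (two records only).
raw = """[
 {"id":"ytc_UgzF0lOVVic0nTsjrqB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyTOY2zGBER34OSCjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Allowed values per dimension -- assumed from the codes seen in this output.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "regulate", "ban", "none"},
    "emotion": {"outrage", "resignation", "indifference", "approval"},
}

def validate(records):
    """Keep only records whose value for every dimension is in SCHEMA."""
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

records = validate(json.loads(raw))
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgyTOY2zGBER34OSCjJ4AaABAg"]["emotion"])  # resignation
```

A check like this makes malformed or off-schema model output visible before the codes are written into the Coding Result table.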