Raw LLM Responses

Inspect the exact raw model output returned for each coded comment.

Comment (verbatim, unedited)
From a statistical and robotic standpoint devoid of emotions, the AI is right, in the scenarios, it’s choice are right, bc it maximizes the potential while trying to minimize risk, if a black man and a white man need treatment, from a (not right emotional but sound logically) logical and statistical standpoint it’s better to treat the white man first since the black man has a much higher chance of dying due to being murder so you’re prioritizing the individual with the most likely chance to live-live in length, now that standpoint is devoid of any emotional intimacy which is why we see it as racist, however from a pragmatic and overtly logical stance it isn’t wrong if you wanted that racial bias to be minimized you’d reduce the variable that causes blacks to be murdered more aka gang violence(you could argue black on black crime but I’d lean more towards gang violence) there are many logical pragmatic views that are devoid of emotions but are correct yet racist, if you ran an ai that’s goal was to ascertain individuals who have more societal worth, you’d end up with a data set of a majority of white people, yes it’s racially bias, but that bias comes from crime rates, murder rates, and higher education rates(which you could argue that latter is low due to racism but you’d be totally wrong and an idiot, the flaw isn’t in the ai, it’s on the fact that there are inherent differences in different social groups
youtube AI Bias 2022-12-15T00:5…
Coding Result
Dimension        Value
---------        -----
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgygfgufDvSaflenMJF4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyiuslHDo_SkdPMMVZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyNRnYKA7J8ZQkxQAh4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugyg6ocp1f2dpEvI2OV4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwwB2mfelN6HTsLhxp4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgwmqP483ZUCK5-OC-J4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugy7ru4ZcpQUOlSSM214AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgwlfdfGCDaQ-hd9o0N4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugx3xbAURnu2BaP3T8l4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzEtg2a-xpKELQ-oTB4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",          "emotion": "amusement"}
]
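A raw response like the one above should be checked before its labels are trusted: the model may emit malformed JSON or a label outside the codebook. Below is a minimal validation sketch. The allowed label sets in `SCHEMA` are inferred only from the values visible in this export, not from the full codebook, so treat them as assumptions.

```python
import json

# Allowed labels per dimension, inferred from the values visible in this
# export (assumption -- the actual codebook may define additional labels).
SCHEMA = {
    "responsibility": {"none", "unclear", "distributed", "developer",
                       "government", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "industry_self", "liability",
               "regulate", "ban"},
    "emotion": {"approval", "mixed", "indifference", "outrage",
                "fear", "amusement"},
}

def validate(raw: str) -> list[str]:
    """Return a list of problems found in a raw LLM coding response.

    An empty list means every record parsed and every dimension carried
    a label from SCHEMA.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    for rec in records:
        rid = rec.get("id", "<missing id>")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rid}: bad {dim}={value!r}")
    return problems
```

Usage on a well-formed record returns an empty list; an out-of-schema label or a JSON syntax error produces a human-readable problem string instead, which can be logged before re-prompting the model.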