Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Okay people need to learn statistics. Nothing is perfectly balanced or exactly in the middle. For instance more than half of people are right handed. This means that a logical system like a learning program (AI) will categorize people based on all sorts of arbitrary metrics. What you're calling racism is actually an expression of true statistics but without humanity or politeness. As with the left handed example above, every ethnic and physical categorization will be more or less involved with any specific thing. One ethnicity must have more mailmen than any other ethnicity because nothing is perfectly equal. We also have absolute numbers versus a number corrected for percentage population. So two different ethnicities can both have the most green bean lovers: one by absolute numbers, and the other as a percentage of population. This remains true for negative facts too; there will always be a leader in deaths by sepsis because nothing is perfectly balanced or in the middle. The problem is that people expect a learning program to be sensitive about ethnic statistical facts. That says far more about us than it does about the computer. We often lie by omission because the general population would get upset if they knew things were never perfectly balanced or in the middle.
youtube AI Bias 2023-01-26T21:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwZ6OaqgDh3_RBpF894AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwdUWRr50Y1Aj4j5GZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyOcK6JkZE9-Ddm6Pp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugw-QoaYAsUDC861My94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwUdxC89O4aDFAW6rR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzEpN_Oe3GZ5nzvHd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwJnxzH2uAnhceciId4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxOws3OIJR5uPoJMZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwF7XvswhgWbLRvsaF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyvqWXhiJnoqccMNl54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"})