Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you for your comment. If the models really only selected on things like programming qualifications, then I would see no issue. The problem is that they DON'T do that (or at least, didn't). They select on variables that have nothing to do with actual qualifications and simply reflect the (intentional or unintentional) biases of those who made previous hiring decisions.

And you are right that women are underrepresented in STEM fields. But there are ways to fix that too. A personal example from my institution, Carnegie Mellon University's Computer Science department: it is one of the best departments in the world and was, like everywhere else, incredibly male-dominated in terms of undergraduate students. And the story was that there just aren't enough girls in high school interested in STEM to get "high quality" undergraduates. Well, that was entirely based on beliefs that were inconsistent with reality, but consistent with the pre-existing biases that folks here at CMU had.

So, CMU, to their incredible credit, decided to fix this. They changed how they considered admissions and focused on a holistic take on students, rather than on things like "do you already know how to code before even coming to college". In doing so they increased female representation in the undergraduate CS program to ~50%, and, critically, the quality of graduating students did not drop one bit. It's not that the school took underqualified female applicants; rather, they redefined what it even means to be qualified. And without ever watering down the curriculum, that redefinition resulted in equitable gender admissions AND kept standards high. It can be done (see details here: https://cacm.acm.org/magazines/2019/2/234346-how-computer-science-at-cmu-is-attracting-and-retaining-women/fulltext).

With algorithms, it's tricky, but there are new approaches to this.
You can penalize AI systems while training them if, for instance, gender bias is detected after the fact (this is more technical than I care to get into here, but if you're curious, see https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai). You can also do everything possible to ensure equitable representation in training data (e.g., facial recognition training datasets use overwhelmingly white faces, so those systems perform badly, with lots of false positives, when identifying the faces of people of color). There are ways to fix this. They aren't easy, but it can be done.
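The "penalize during training" idea from the comment can be illustrated with a minimal sketch. This is not the method from the linked HBR article; it is a hypothetical illustration of one common approach, adding a demographic-parity gap (the difference in average predicted score between two groups) as a penalty on top of the task loss. The function names, the toy scores, and the weight `lam` are all assumptions made up for this example.

```python
import numpy as np


def demographic_parity_gap(scores, group):
    """Absolute difference in mean predicted score between group 0 and group 1."""
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    return abs(scores[group == 0].mean() - scores[group == 1].mean())


def penalized_loss(base_loss, scores, group, lam=1.0):
    """Task loss plus a fairness penalty; lam trades off accuracy vs. parity."""
    return base_loss + lam * demographic_parity_gap(scores, group)


# Toy example: a model scores six candidates; `group` encodes a protected
# attribute. The large gap in average scores would be penalized in training.
scores = [0.9, 0.8, 0.7, 0.2, 0.3, 0.1]
group = [0, 0, 0, 1, 1, 1]
print(round(demographic_parity_gap(scores, group), 2))  # 0.6
```

During training, minimizing `penalized_loss` pushes the model toward parameters whose predictions depend less on the protected attribute; with `lam=0` it reduces to ordinary training.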
youtube · AI Bias · 2021-06-11T12:4… · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgybE2mrAwiiha8yxFx4AaABAg.A3dNQfvqMGOA3vVBfRPz41", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgybE2mrAwiiha8yxFx4AaABAg.A3dNQfvqMGOA3vWiOma9_1", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwBAUfLX6YFD02b-Bp4AaABAg.9qW2HrS7X7H9tDy4hdVZxO", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwcGRjUJPJjv9WZLHp4AaABAg.9q9fXxd_1VoA2UhlVIGRX8", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwcGRjUJPJjv9WZLHp4AaABAg.9q9fXxd_1VoA3fOrkoLjte", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_UgyXqu0AGU38a3H5MVZ4AaABAg.9pxFu_zWhgLA3fAO5imfI1", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzdmLcy44dOdN0vM3N4AaABAg.9OaK7PR8sXG9OgnhbO10FJ", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytr_Ugy-6u5XtkH_DMXs74d4AaABAg.9ORjTKQifOv9OSF7YVMPzT", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_Ugy-6u5XtkH_DMXs74d4AaABAg.9ORjTKQifOv9OSQLJBfgfn", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugy-6u5XtkH_DMXs74d4AaABAg.9ORjTKQifOv9P_janyxgcS", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
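To inspect the raw response for a specific comment, the JSON array can be parsed and indexed by `id`. This is a minimal sketch, assuming the raw response is available as a string; the variable names are made up, and the `raw` string below is abbreviated to a single entry taken verbatim from the response above.

```python
import json

# One entry copied from the raw LLM response above (abbreviated for the sketch).
raw = (
    '[{"id": "ytr_Ugy-6u5XtkH_DMXs74d4AaABAg.9ORjTKQifOv9OSF7YVMPzT", '
    '"responsibility": "company", "reasoning": "consequentialist", '
    '"policy": "liability", "emotion": "outrage"}]'
)

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytr_Ugy-6u5XtkH_DMXs74d4AaABAg.9ORjTKQifOv9OSF7YVMPzT"]
print(row["emotion"])  # outrage
```

Looking an entry up this way makes it easy to check that the coding shown in the table (company / consequentialist / liability / outrage) matches the exact model output.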