Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> An easy example would be with an algorithm designed to identify the best computer engineers. Women (for whatever reason, cultural or otherwise) don't tend to major in computer engineering and those that do are less likely to get post-grad degrees or apply for competitive internships

In which case why not include features to identify post-graduate education and competitive internships? If you're measuring solely based on their educational qualifications and experience, including gender in the model would likely be insignificant if it were collinear with education and experience.

I get what you're trying to say but as someone who also does this kind of thing for my day job, theoretically you should be able to build a model that accounts for any gender biases without needing to include gender as a feature. If I were to build such a model I would likely try that rather than go through the shit-storm of having a model that blatantly takes gender into account.

> The proper way to "de-bias" these models without sacrificing validity is to figure out why the differences emerge in the first place; to fix the problem at its source

And you are completely right here, the goal would be to understand why the biases exist and fix them at the source (e.g. include the features gender is just a proxy for).
reddit · Cross-Cultural · 2018-10-10 (Unix timestamp 1539204800) · ♥ 9
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_e7j94ef","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},{"id":"rdc_e7l28qc","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},{"id":"rdc_e7jb51r","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},{"id":"rdc_e7je0wy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"rdc_e7ihc49","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]