Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> For it to come to the conclusion that men were preferable requires gender to have been categorized in the first place.

Not necessarily. You could omit gender from the input, but then compare the success rates of the genders once it has chosen. I.e., you could submit all the CVs without labelling the gender; once the AI picks the appropriate candidates, you then compare the genders of those chosen.

> Are there any more in-depth articles on this?

There's a similar case that ProPublica went into in depth back in May 2016. They wrote multiple articles going into detail on why this can (and often does) happen with almost all machine learning algorithms: it's not the algorithm's fault but the input data itself.
Source: reddit · Cross-Cultural · 2018-10-10 (Unix 1539192313) · ♥ 7
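
The audit the reply describes is straightforward to sketch: screen candidates with gender withheld from the model, then compare per-gender selection rates afterwards. A minimal sketch in Python, where `screen_cv`, and the `text`/`gender` record fields, are hypothetical names for illustration:

```python
from collections import Counter

def selection_rates(cvs, screen_cv):
    """cvs: list of dicts with 'text' and 'gender' keys. Gender is
    withheld from screen_cv and used only for the post-hoc comparison."""
    # Gender-blind pass: the model sees only the CV text.
    chosen = [cv for cv in cvs if screen_cv(cv["text"])]
    picked = Counter(cv["gender"] for cv in chosen)
    total = Counter(cv["gender"] for cv in cvs)
    # Selection rate per gender: share of that gender's applicants chosen.
    return {g: picked[g] / total[g] for g in total}
```

A large gap between the returned rates would flag the disparity the commenter describes, even though the model never saw gender as an input.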
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[ {"id":"rdc_e7ij0cd","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_e7ivqgp","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_e7jrpn7","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_e7jp9so","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_emn5ewy","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"} ]