Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> (a) the test’s probability estimates are systematically skewed upward or downward for at least one gender;

This is undesirable because it results in inaccurate estimates.

> (b) the test assigns a higher average risk estimate to healthy people (non-carriers) in one gender than the other; or (c) the test assigns a higher average risk estimate to carriers of the disease in one gender than the other.

Why are these undesirable? I would expect both of these to hold true in a fair algorithm. One gender is more at risk of the disease than the other; risk estimates for that gender should be higher on average, or what you're calculating isn't a risk estimate. If you knew ahead of time who was healthy and who was a carrier, you wouldn't need a risk estimate, so the average estimate for members of that gender should be higher whether or not they are a carrier. That's if literally the only data point you're looking at is gender (not a very effective risk estimate); if you try to give women and men the same average risk rating when more of one gender is a carrier than the other, you're asking your algorithm to lie to you.
reddit · Cross-Cultural · posted 1539206251.0 (Unix timestamp, 2018-10-10 UTC) · ♥ 35
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_e7jcup6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_e7j520q","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_e7j7w3s","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"rdc_e7j89pj","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"rdc_e7jcxyx","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"} ]