Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Algorithmic bias is actually really tricky to deal with. It can be mathematically proven that three notions of fairness (each of which it would be quite reasonable to expect a fair algorithm to respect) are mutually incompatible. Without getting overly technical, this is the essential result from the [paper](https://arxiv.org/pdf/1609.05807.pdf):

> To take one simple example, suppose we want to determine the risk that a person is a carrier for a disease X, and suppose that a higher fraction of women than men are carriers. Then our results imply that in any test designed to estimate the probability that someone is a carrier of X, at least one of the following undesirable properties must hold: (a) the test’s probability estimates are systematically skewed upward or downward for at least one gender; or (b) the test assigns a higher average risk estimate to healthy people (non-carriers) in one gender than the other; or (c) the test assigns a higher average risk estimate to carriers of the disease in one gender than the other. The point is that this trade-off among (a), (b), and (c) is not a fact about medicine; it is simply a fact about risk estimates when the base rates differ between two groups.

This issue was first brought to mainstream attention by this 2016 [ProPublica article](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing), where the risk would be criminal reoffending, and we would replace "women vs. men" with "blacks vs. whites." Analogously, this also applies directly to **any** decision process used to hire employees, regardless of whether it is carried out by humans or by ML.
reddit Cross-Cultural 1539201907.0 ♥ 1001
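The trade-off the comment describes can be sketched numerically. The following is a minimal illustration with hypothetical base rates (the group sizes and rates are invented, not taken from the paper): a perfectly calibrated test that scores everyone at their group's base rate necessarily assigns different average risk estimates to non-carriers in the two groups.

```python
def average(xs):
    return sum(xs) / len(xs)

# Toy populations with differing base rates (hypothetical numbers).
# 1 = carrier, 0 = non-carrier.
women = [1] * 20 + [0] * 80   # base rate 0.20
men   = [1] * 5  + [0] * 95   # base rate 0.05

# A perfectly calibrated test can simply assign each person their
# group's base rate as the risk estimate: among everyone scored s,
# exactly a fraction s are carriers, so property (a) holds for both
# groups (no systematic skew for either gender).
score_women = average(women)  # 0.20
score_men   = average(men)    # 0.05

# But then every healthy woman is scored 0.20 and every healthy man
# 0.05, so the average risk estimate assigned to non-carriers differs
# across groups: property (b) fails (and likewise (c) for carriers).
noncarrier_avg_women = average([score_women for x in women if x == 0])
noncarrier_avg_men   = average([score_men for x in men if x == 0])
print(noncarrier_avg_women, noncarrier_avg_men)  # 0.2 0.05
```

Any attempt to equalize the non-carrier averages (e.g. scoring both groups identically) would instead break calibration for at least one group, which is exactly the impossibility the quoted passage states.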
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_n7i6902", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7i75mz", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n7i7j11", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_e7im7tm", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_e7j7mps", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]