Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I hate the use of AI for all of the reasons states, but ALSO because of the envi…" (ytc_UgwXk3Prt…)
- "this is why the advent of ai is a genuine mistake we made as humanity…" (ytc_Ugy8Fk-i5…)
- "I don't think it's always the case . People do change art style but it's not lik…" (ytr_Ugy-YCiru…)
- "23:33 - sorry, but these characters look very "neural" to me. I have no clue abo…" (ytc_UgzHUqAcO…)
- "If every car on the road would be self-driving and intelligent, I think these sc…" (ytc_UggqSiIbU…)
- "I am wondering how AI shuts down the grids of the world without destroying itsel…" (ytc_Ugw6oybp0…)
- "it's scarier that we allow scum and criminals to run in the first place. at leas…" (ytr_UgwozVxqQ…)
- "I got my middle sister (who isnt an artist) and asked her about the drawing in m…" (ytc_UgyLKgQK2…)
Comment
Algorithmic bias is actually really tricky to deal with. It can be mathematically proven that three notions of fairness (each of which would be quite reasonable to expect a fair algorithm to respect) are mutually incompatible. Without being overly technical, this is the essential result from the [paper](https://arxiv.org/pdf/1609.05807.pdf):
> To take one simple example, suppose we want to determine the risk that a person is a carrier for a disease X, and suppose that a higher fraction of women than men are carriers. Then our results imply that in any test designed to estimate the probability that someone is a carrier of X, at least one of the following undesirable properties must hold: (a) the test’s probability estimates are systematically skewed upward or downward for at least one gender; or (b) the test assigns a higher average risk estimate to healthy people (non-carriers) in one gender than the other; or (c) the test assigns a higher average risk estimate to carriers of the disease in one gender than the other. The point is that this trade-off among (a), (b), and (c) is not a fact about medicine; it is simply a fact about risk estimates when the base rates differ between two groups.
This issue was first brought to mainstream attention by this 2016 [ProPublica article](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing), where the risk in question was criminal reoffending and "women vs men" is replaced with "blacks vs whites." Analogously, this applies directly to **any** decision process used to hire employees, regardless of whether it is carried out by humans or by ML.
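To make the base-rate argument concrete, here is a minimal numerical sketch. The group names, base rates, and signal model below are invented purely for illustration; they are not taken from the paper or from the ProPublica analysis. Within each group the score is calibrated by construction (it is the true posterior), yet the average scores assigned to non-carriers and to carriers still differ between the groups because the base rates differ.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate(base_rate, n=100_000, signal_strength=1.0):
    """Simulate one group and return (true carrier status, calibrated risk score)."""
    # True carrier status drawn at the group's (made-up) base rate.
    carrier = rng.random(n) < base_rate
    # A noisy signal: carriers tend to produce higher values than non-carriers.
    signal = rng.normal(loc=carrier * signal_strength, scale=1.0)
    # Bayes-calibrated risk score P(carrier | signal), using the group's base rate.
    like_carrier = norm.pdf(signal, loc=signal_strength, scale=1.0)
    like_healthy = norm.pdf(signal, loc=0.0, scale=1.0)
    score = like_carrier * base_rate / (
        like_carrier * base_rate + like_healthy * (1 - base_rate)
    )
    return carrier, score

# Two hypothetical groups with different base rates but identical signal quality.
for name, base_rate in [("group_A", 0.10), ("group_B", 0.30)]:
    carrier, score = simulate(base_rate)
    print(
        name,
        "avg score among non-carriers:", round(score[~carrier].mean(), 3),
        "avg score among carriers:", round(score[carrier].mean(), 3),
    )
```

Running this shows property (a) holding within each group while (b) and (c) fail across groups: non-carriers in the higher-base-rate group receive higher average risk estimates than non-carriers in the other group, and likewise for carriers, exactly the trade-off described in the quoted passage.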
| Field | Value |
|---|---|
| Source | reddit |
| Subset | Cross-Cultural |
| Timestamp | 1539201907.0 |
| Score | ♥ 1001 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_n7i6902","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_n7i75mz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_n7i7j11","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_e7im7tm","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_e7j7mps","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
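For context, the raw response above is a JSON array with one record per comment ID and the four coded dimensions shown in the table. Below is a minimal sketch of how such a response could be parsed and sanity-checked before storage; the allowed value sets are guesses inferred only from the values visible on this page, not the project's actual codebook.

```python
import json

# Assumed allowed values per dimension, inferred from this page only.
ALLOWED = {
    "responsibility": {"none", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "resignation", "outrage", "mixed"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse the JSON array and flag any record with an unexpected coded value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example usage with one record from the response shown above.
raw = '[{"id":"rdc_n7i6902","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
print(parse_llm_response(raw))
```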