Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Things will get worse Not better, even the Lord says in His Holy Word The Bible:… (ytc_Ugw-fqZw3…)
- @Dak_jadav not really, from my understanding, the way AI works is it looks at o… (ytr_Ugz4Mg6tF…)
- If a robot put its arm up and proclaimed "I'm alive"! Id be a bit worried lol… (ytc_Ugx5zQsQa…)
- Where would I express my feelings to AI art companies that are doing this, and h… (ytc_UgyZsIaX2…)
- If AI takes all jobs, then no one works, then no one wins money then no one spen… (ytc_UgxNiZ9AM…)
- it seems to me that it is programmed to lie, it is programmed to do the lip serv… (ytc_UgwKeQ6mi…)
- Me: So sir, what me you think that Digital Drawning are soulless as AI? The idi… (ytc_Ugz-bybKI…)
- It's hilarious seeing all these posts saying how they hate AI art but they only … (ytr_UgwILiczX…)
Comment
>Gender bias will emerge in a well-constructed algorithm if gender correlates with performance.
I think the critical factor here is how you define "performance." In this case, at least one article I read stated that successful "performance" just meant getting hired by Amazon. Since human managers preferred male candidates, the machine also learned to prefer male candidates. You could also define "performance" by how long they stayed with the company, how quickly they got promoted, or by their quarterly evals. Every single one of those performance measures would be tainted by gender bias in almost any STEM field. Studies show that the same resume and accomplishments are valued less if they are attached to a woman (by about 20% in the study I remember). Leadership traits like assertiveness are rewarded in men and punished in women. In a pool of highly educated and accomplished candidates, subjective factors will always be the deal-breaker, and those subjective factors tend to be biased against women in most STEM fields. Trying to use an AI model to find the candidate who will perform the "best" will only propagate the problem so long as performance measures undervalue women (and other minorities in a field). Untangling that knot is a far more complex task than most people acknowledge, and I don't think computer engineers alone are going to be able to do it.
> Sure, it is possible that the training data was invalid, but there's no way in hell that Amazon is employing amateur modelers who can't obtain a valid dataset and prepare it properly.
This is actually a really important point, because it implies that one of two things is true. 1) The Amazon modelers realized that there was an inherent bias against women at the company and developed an AI that would model this and thus prove there was a problem within the company. 2) The otherwise well-trained Amazon modelers did *not* realize that women faced systemic discrimination in the field of engineering and ther…
reddit · Cross-Cultural · 1539224133.0 · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_e7jm1ke", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_e7jgcg1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_e7jcw1i", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_e7jva6y", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_e7jcktr", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
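The raw response is a JSON array of per-comment records keyed by `id`. A minimal sketch of how such a payload might be parsed into a lookup table (`parse_coded_batch` is a hypothetical helper, not part of the tool; falling back to an empty result on a malformed payload is one plausible reason a comment's dimensions would all render as "unclear"):

```python
import json

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: dimensions}.

    Hypothetical helper: if the model output is not valid JSON,
    return an empty dict so the UI can fall back to "unclear"
    for every coding dimension.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    # Drop the "id" key from each record; use it as the lookup key.
    return {r["id"]: {k: v for k, v in r.items() if k != "id"}
            for r in records}

# Usage: look up one comment's coded dimensions by its ID.
raw = ('[{"id":"rdc_e7jm1ke","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')
coded = parse_coded_batch(raw)
# coded["rdc_e7jm1ke"]["emotion"] is "resignation"
```

The same dict supports the "Look up by comment ID" view above: one parse per batch, then constant-time retrieval per comment.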