Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I thought its All Indian AI, hehe, thats replacing IT services team and Industry…
ytc_UgyRkHDZm…
I dont get why AI can't just use the wikipedia commons database. Theres a bunch …
ytc_UgzbO1BqI…
I think people are underestimating the general psychological profiles of people …
ytc_UgzWiIXAj…
Its ALL about the money to make. Everything goes out the window when billions or…
ytc_UgzpxmYg5…
Y'all realize musk is using AI in all his businesses right? Space X, Tesla, neur…
ytc_Ugw8PUT3_…
Conspiracy theory time, the only possible endgoal for after the AI replaces ever…
ytc_UgwnU9DFH…
if, at some point in the future, ai art became all there really was for art, nob…
ytc_Ugxlzvs5c…
Unfortunately, facial recognition tech is constantly becoming cheaper and easier…
rdc_iyz99d2
Comment
An interesting paper published in the journal Psychological Science in 2018 looked at cross-cultural differences using international databases of achievement in STEM programs, and found that the lower a country's Global Gender Gap Index, the more likely it is to have an "equal" ratio of women to men among university STEM graduates. That is to say, countries with a high Global Gender Gap Index, like Finland, Norway, and Sweden, tend to have significantly lower rates of women among STEM graduates (~20-25%), whereas some of the countries with the lowest Global Gender Gap Index, like the UAE, Turkey, and Algeria, have some of the highest rates of women among STEM graduates (~36-41%).
The paper is titled "The Gender-Equality Paradox in Science, Technology, Engineering, and Mathematics Education" (DOI: 10.1177/0956797617741719).
In the Nursing and Programmer example, you mention data reflecting hidden biases in society, and certainly some hidden biases must influence this population distribution. But it would be apt to also note that bias can exist in the way the data is presented, too. This kind of intervention is called "algorithmic fairness"; it is used by Google, and it ties in with the data manipulation mentioned in section 5, though arguably it's not "malicious".
At its core, algorithmic fairness manipulates data to over-represent groups. There are examples of this anybody can test: image searches that return a nearly 50:50 split between two subpopulations even though their actual ratio in reality is not 50:50. This isn't malicious, but it could be harmful all the same. In an ideal world with true equality, we could pursue whatever career we want without worrying about the statistics of who makes up which job. And in the most "equal" societies, we find that there are fewer women in STEM. While using algorithmic fairness to show equal numbers of pictures of men and women might make women in STEM feel better, it might also make women who don't wish to pursue STEM feel bad for not contributing to that equality. This may sound silly, but we see the consequences of it in the "STEAM" movement, which tries to fold the Arts into STEM to be more inclusive of women.
Ultimately this boils down to the problem of equality of outcome versus equality of opportunity. We know that the most equal societies come close to equality of opportunity but fall short of equality of outcome, and that the least equal societies come closer to equality of outcome but nowhere near equality of opportunity. The key question, then, is "Should algorithms reflect the actual data even if it's biased, or should algorithmic fairness be implemented to make up for biases hidden in society?" because there are arguments for both sides. If we truly believe that more equal societies are better, I think there's merit in accepting disproportionate gender representation.
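To make the mechanism the comment describes concrete, here is a minimal sketch of one simple form of demographic-parity re-ranking: interleaving results from two groups so the top of the list approaches a 50:50 split regardless of the underlying base rate. This is purely illustrative (the `group` field is a hypothetical name; this is not Google's or any real system's implementation):

```python
def rerank_fifty_fifty(results: list[dict]) -> list[dict]:
    """Interleave results from two groups so the returned ordering
    approaches a 50:50 split, regardless of the groups' base rates.
    Each result is assumed (hypothetically) to carry a 'group' field
    with value 'a' or 'b'."""
    group_a = [r for r in results if r["group"] == "a"]
    group_b = [r for r in results if r["group"] == "b"]
    interleaved: list[dict] = []
    for a, b in zip(group_a, group_b):  # alternate until one group runs out
        interleaved += [a, b]
    # The larger group contributes the leftover tail.
    cut = min(len(group_a), len(group_b))
    interleaved += (group_a if len(group_a) > len(group_b) else group_b)[cut:]
    return interleaved
```

The point of the sketch is the trade-off the comment raises: the head of the re-ranked list is balanced even when the underlying data is not.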
youtube · AI Harm Incident · 2019-12-14T04:3… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
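The dimensions above follow a closed vocabulary. For anyone consuming these records programmatically, here is a minimal Python validation sketch; the allowed value sets are inferred solely from the responses visible on this page, so the actual codebook may define additional values:

```python
# Allowed values per coding dimension, inferred from the records shown
# on this page; the real codebook may contain more.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval", "resignation"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with a coded record; empty means valid."""
    problems = []
    if not record.get("id"):
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in known vocabulary")
    return problems
```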
Raw LLM Response
[
{"id":"ytc_UgwdzQf4Z81Wub_oBNh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKdgOX1tqdrJ-LX8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx17723EZEsceZt_yp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwJjjxAxVRWcecmWyN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRuJzvS40auV0Pk7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwIFEfyAHEN7eFJOHF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVtaD4ShO5brx3M9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFtDEwbaEIkOGAyr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzK-DjV2ISsCeBaM2B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyolKZJVkQldyGTCjh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
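Since the raw response is a JSON array of per-comment records, fetching the exact model output for a single comment (the "look up by comment ID" flow above) is a parse-and-filter. A minimal sketch, assuming the response text is available as a string; the file path in the usage comment is hypothetical:

```python
import json

def lookup_raw_code(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw LLM response (a JSON array of coded records) and
    return the record whose 'id' matches, or None if it is absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Usage (path is hypothetical):
# raw_text = open("raw_llm_response.json").read()
# lookup_raw_code(raw_text, "ytc_Ugx17723EZEsceZt_yp4AaABAg")
# -> {"id": ..., "responsibility": "unclear", "reasoning": "consequentialist",
#     "policy": "unclear", "emotion": "indifference"}
```

Note that the record returned for ytc_Ugx17723EZEsceZt_yp4AaABAg matches the Coding Result table above, which is the kind of consistency this inspection view makes easy to verify.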