Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "*the other robot making the box mistake* / Other robot:oops my bade / The robot:YO…" (ID: `ytc_UgxlpF5Lx…`)
- "Yeah it doesn't take a computer to steal art. Art thieves always have and always…" (ID: `ytc_UgxrCOP25…`)
- "im a bit baffled by how people approach this. superintelligent ai will in about …" (ID: `ytc_UgxpuvyLd…`)
- "They're expecting college admissions to drop significantly for future generation…" (ID: `ytc_UgwWwendd…`)
- "Wait, how many self driving cars are there on the road? How many deaths did they…" (ID: `ytc_UgwCRS5qQ…`)
- "@FlightlessTuatarato me he seems to be saying that LLMs have produced insights …" (ID: `ytr_UgxJ0_RUc…`)
- "Copy & past in Grok or Gemini. Ask it to consider each point without mainstream …" (ID: `ytc_UgwV_5o24…`)
- "I feel like this is AI interviewing the United States right now! 😂 #maintenance …" (ID: `ytc_UgxBLj3el…`)
Comment
> An easy example would be with an algorithm designed to identify the best computer engineers. Women (for whatever reason, cultural or otherwise) don't tend to major in computer engineering and those that do are less likely to get post-grad degrees or apply for competitive internships
In which case why not include features to identify post-graduate education and competitive internships? If you're measuring solely based on their educational qualifications and experience, including gender in the model would likely be insignificant if it were collinear with education and experience.
I get what you're trying to say but as someone who also does this kind of thing for my day job, theoretically you should be able to build a model that accounts for any gender biases without needing to include gender as a feature. If I were to build such a model I would likely try that rather than go through the shit-storm of having a model that blatantly takes gender into account.
> The proper way to "de-bias" these models without sacrificing validity is to figure out why the differences emerge in the first place; to fix the problem at its source
And you are completely right here, the goal would be to understand why the biases exist and fix them at the source (e.g. include the features gender is just a proxy for).
reddit
Cross-Cultural
Posted 2018-10-10 (Unix timestamp 1539204800)
♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_e7j94ef", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_e7l28qc", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_e7jb51r", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_e7je0wy", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_e7ihc49", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
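A minimal sketch of how a raw response like the one above could be turned into the per-comment lookup the page describes: parse the JSON array and index each coding record by its comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown; the function name and the two-record sample string are illustrative, not the tool's actual code.

```python
import json

# Illustrative excerpt in the same shape as the raw LLM response above.
raw_response = """[
  {"id": "rdc_e7j94ef", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_e7je0wy", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "outrage"}
]"""

def index_codes_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each coding record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes_by_id(raw_response)
print(codes["rdc_e7je0wy"]["emotion"])  # -> outrage
```

With an index like this, "look up by comment ID" is a single dictionary access per coded comment.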