Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
There's a massive leap to go from AI taking jobs to AI controlling nukes. That i…
ytc_UgxDN-tHo…
@gonzalobarragan8076human artists DO learn from other artists, but they can nev…
ytr_Ugx-KZjOH…
CEOs legit dont need to exist. I think the most they do is attend a bunch of mee…
ytc_UgxJhmL7B…
AI arms race is gonna be terrifying. He’s going “guardrails are good” but “we ne…
ytc_UgxV15yNw…
Even with AI they can't help but lie and be racist? How to be a loser 101. Every…
ytc_UgxQBjnkA…
Yes it will…. No jobs out there…. Remaining are up for competition from outsourc…
ytc_UgyxBu9Hp…
Training large models with massive amounts of IP isn't ethically problematic. Yo…
ytc_UgwTYJeGk…
Every invention i believe is for good use . But u end up abusing it…
ytc_UgzkkqXNB…
Comment
As a Latino from a working-class family with a masters in AI… this article is trash. Yes, there are tons of problems with its application, and they deserve the engineering time it takes to preempt or solve them. Yes, sometimes upper management decides to ignore them. But in general, these things almost always *are* bugs that stem more from laziness than some nefarious plot
“to uphold the systems of racism, misogyny, ability, class and other axis of oppression.”
All AI is doing is looking for patterns in a dataset to model. You can layer logic on top of it when you don’t like the pattern (and this is often what the “bugfixes” do) or change the dataset, but until you have a model it’s hard to predict what patterns it’s going to pick up on. The whole reason we use AI is because it’s hard to find the hidden patterns ourselves. That’s why these things are usually retroactive.
Spoiler… these patterns are often white-centric because the data is, and that’s a hard problem to solve. Health predictors work better for white people because minorities don’t go to the doctor as often. Crime predictors are biased because cops police black neighborhoods harder. Face recognition on iPhone apps might be better for whites if whites are more likely to have iPhones. These companies often have programs to get more minority representation in their datasets, but that’s difficult too: minorities are usually suspicious of them. I’m not saying they shouldn’t be, just that it’s difficult to have a fair model when you can’t get fair representation in the data.
Source: reddit
Topic: AI Harm Incident
Posted (Unix timestamp): 1625888014.0
♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_h4midsc","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_h4o62b7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_h4o8wuv","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_h4o6xun","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_h4nz5dj","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"outrage"}
]
```
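The comment-ID lookup described above amounts to parsing the model's JSON array and indexing it by `id`. A minimal sketch (the `lookup` helper is hypothetical; only two rows of the response above are reproduced for brevity):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above.
raw = """[
  {"id": "rdc_h4midsc", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "resignation"},
  {"id": "rdc_h4nz5dj", "responsibility": "company",
   "reasoning": "virtue", "policy": "industry_self",
   "emotion": "outrage"}
]"""

# Index the parsed rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codes[comment_id]

# The row behind the "Coding Result" table above:
print(lookup("rdc_h4nz5dj")["emotion"])  # -> outrage
```

Indexing once and looking up by ID is what lets the inspector jump straight from a comment ID (e.g. `rdc_h4nz5dj`) to its coded dimensions without rescanning the raw response.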