Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzcaimfR…: "I can't believe Elon just said so you can jump in a car go to sleep and wake up …"
- ytr_UgzdqoRTv…: "While I mostly agree, especially with the last part, the one thing you said that…"
- ytc_UgwXXE-MX…: "ai will meet its reckoning. there's only so much average people can take. news b…"
- ytc_Ugzvmz_mo…: "Trump will choose money over people, every day, he has less than zero concern yo…"
- ytc_UgwJbnuWh…: "He's on point with the social-economic analysis. Either AI benefits all of us or…"
- ytc_UgwJS67D0…: "People in comments actually believe this is real. If AI was this advanced they w…"
- ytc_Ugw7ORobi…: "Hes appearing ignorant and clueless on purpose, he really has great understandin…"
- ytc_UggqFfzL0…: "i think once they get the payment/credit card swipe the trunk will open and you …"
Comment
I too have worked in AI and this is a very awkward way to present the issue here.
You act as if the public misunderstanding is that people think AI is biased, but no one looks at AI models and thinks they're biased, regardless of how unfamiliar they are with the techniques involved. The problem is exactly the reverse: people profoundly overestimate how unbiased they are because they fail to take into account bias in the training data and in the measurement you're using to calculate error.
And anyone working in AI knows that you *constantly* discover training biases you didn't expect. It's extremely common to look at the output and see an unexpected pattern then realize it stems from an undesirable bias in the training data.
It is very easy to imagine a modeler making exactly the mistake described here. There is relatively little in resumes that indicates gender, so they may not have expected that the model would learn that most existing hires are male. The resumes were likely anonymized in the training data, so the most obvious cue to gender was gone. The model being able to infer gender from college is precisely the sort of accidental bias that you might overlook.
And on the more technical side, it is not even a little bit true that these models are totally straightforward and objective and fool-proof. Sometimes models don't converge at all, sometimes you run into local minima, sometimes you get overlearning, etc. An enormous amount of work in tuning a lot of kinds of models involves tweaking the dials until the output looks more or less like you expect it to.
As someone who has worked with a lot of AI models, the assumption that the model was good and this is just politics is extremely silly. This is just wild, ideological speculation.
reddit · Cross-Cultural · timestamp 1539214788.0 · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"rdc_e7jgcg1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_e7jcw1i","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"rdc_e7jva6y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_e7jcktr","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
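The "look up by comment ID" step above amounts to parsing a raw response like this one and selecting the record whose `id` matches. A minimal sketch of that lookup, assuming the response is a JSON array of flat records (the function name `lookup_by_id` and the truncated sample data are illustrative, not part of the tool's actual API):

```python
import json

# Illustrative raw response: an abbreviated two-record sample in the same
# shape as the full response shown above.
raw = (
    '[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_e7jgcg1","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

def lookup_by_id(raw_json: str, comment_id: str):
    """Parse a raw LLM response and return the coded record for one comment,
    or None if the JSON is malformed or the ID is absent."""
    try:
        records = json.loads(raw_json)
    except json.JSONDecodeError:
        # Malformed responses (e.g. a stray trailing ')') fail closed,
        # which is why the UI can fall back to showing "unclear" dimensions.
        return None
    return next((r for r in records if r.get("id") == comment_id), None)

print(lookup_by_id(raw, "rdc_e7jgcg1"))
```

Failing closed on a parse error mirrors what the coding-result table above shows when a response cannot be decoded: every dimension is reported as "unclear" rather than guessed.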