Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
While you're correct that messing with the algorithm will strictly make the AI less accurate, that's only with respect to the training data; if the training data is biased on its own, then correcting for that initial bias can make for a more accurate prediction. Amazon itself points to flawed training data as part of the issue; past bias in hiring, for one (and quantity of data, no doubt). And the AI may simply not be sophisticated enough to pick up on nuances in the resumes.
Take their example:
>It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.
An AI might have (correctly) noted that no good candidates come from Bob's Women's College or Women's Discount University, but if it's not sophisticated enough, or there isn't enough data, then it can over-simplify and conclude that the problem is the word "women's", not those two specific schools. It fits the training data, but not necessarily the world at large.
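The over-generalization described above can be sketched with a toy scorer. Everything below is hypothetical illustration — the data is invented and nothing here reflects Amazon's actual model. A naive per-token log-odds classifier, trained on biased hiring outcomes where resumes from two specific schools were rejected, ends up penalizing the shared token "women's" itself rather than the schools:

```python
import math
from collections import Counter

# Hypothetical tokenized resumes with past (biased) hiring outcomes:
# 1 = hired, 0 = rejected. The rejected resumes all share the token
# "women's", even though the schools were the actual rejection signal.
resumes = [
    (["graduate", "women's", "bobs", "college"], 0),
    (["graduate", "women's", "discount", "university"], 0),
    (["captain", "women's", "chess", "club"], 0),
    (["captain", "chess", "club", "state", "university"], 1),
    (["graduate", "state", "college", "award"], 1),
    (["intern", "state", "university"], 1),
]

def token_scores(data):
    """Per-token log-odds of hired vs. rejected, add-one smoothed."""
    hired, rejected = Counter(), Counter()
    for tokens, label in data:
        (hired if label else rejected).update(set(tokens))
    vocab = set(hired) | set(rejected)
    return {t: math.log((hired[t] + 1) / (rejected[t] + 1)) for t in vocab}

scores = token_scores(resumes)
# The scorer blames the common token more heavily than either school:
print(scores["women's"])  # negative, and lower than scores["bobs"]
```

The point of the sketch: "women's" appears in all three rejected resumes but each school token in only one, so the simplistic model learns the strongest penalty for the word itself — it fits the training data while getting the causal story wrong.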
There's also this bit:
>Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.
So it wasn't working in a lot of ways; I'm hesitant to conclude that it was correctly handling female applicants.
Source: reddit · Topic: Cross-Cultural · Posted: 1539206014 (2018-10-10 UTC) · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"rdc_e7jgcg1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_e7jcw1i","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"rdc_e7jva6y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_e7jcktr","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
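A minimal sketch of consuming such a response. The payload embedded below is the response above normalized to a valid JSON array; the field names and values are copied verbatim from it, while the key-set check is an assumption about the expected record shape:

```python
import json

# The raw model output shown above, as a valid JSON array.
raw = (
    '[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_e7jgcg1","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"rdc_e7jcw1i","responsibility":"developer","reasoning":"deontological",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_e7jva6y","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_e7jcktr","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)

# Assumed schema: every record carries exactly these five keys.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
by_id = {}
for rec in records:
    # Reject records with missing or extra fields before indexing.
    assert set(rec) == EXPECTED_KEYS, f"malformed record: {rec}"
    by_id[rec["id"]] = rec

print(by_id["rdc_e7jcw1i"]["emotion"])  # resignation
```

Indexing by `id` makes it easy to join each coded record back to the comment it labels.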