Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> A year later, however, the engineers reportedly noticed something troubling about their engine – it didn’t like women. This was apparently because the AI combed through predominantly male résumés submitted to Amazon over a 10-year period to accrue data about whom to hire.
>
> Consequently, the AI concluded that men were preferable. It reportedly downgraded résumés containing the words “women’s” and filtered out candidates who had attended two women-only colleges.

I imagine they didn't implement a gender check; rather, they probably started by hand-picking the best résumés and then let the A.I. determine what made them good. "Women's" was apparently not a winning word. I don't know a thing about A.I. development, though, so I could be way off the mark.
reddit · Cross-Cultural · 1539253685.0 · ♥ 3
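The commenter's guess describes a standard failure mode: a bag-of-words model trained on past hiring outcomes can assign a negative weight to any token that appears mostly in rejected résumés, with no explicit gender rule anywhere in the code. A minimal sketch with entirely made-up toy data (the résumé strings and the log-odds scoring are illustrative assumptions, not Amazon's system):

```python
import math
from collections import Counter

# Hypothetical training data: résumés labeled by past hiring decisions.
hired = [
    "software engineer java aws",
    "software engineer python aws",
    "engineer java cloud",
]
rejected = [
    "captain women's chess club python",
    "women's college software engineer",
    "java developer women's coding society",
]

def token_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

pos, neg = token_counts(hired), token_counts(rejected)
vocab = set(pos) | set(neg)

def log_odds(token):
    # Smoothed log-odds of a token appearing in hired vs. rejected résumés;
    # add-one smoothing keeps unseen tokens from producing log(0).
    p = (pos[token] + 1) / (sum(pos.values()) + len(vocab))
    q = (neg[token] + 1) / (sum(neg.values()) + len(vocab))
    return math.log(p / q)

print(log_odds("women's"))  # negative: the token correlates with rejection
print(log_odds("aws"))      # positive: the token correlates with hiring
```

Because "women's" occurs only in the rejected pile, the model scores it negatively even though gender was never a feature — which matches the commenter's intuition about how the bias could arise.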
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_e7keifq", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "rdc_e7keda1", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "rdc_e7koo9j", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "rdc_e7irmvj", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "rdc_e7jqq89", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
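Raw model output like the array above is best validated before it feeds a dashboard. A minimal sketch of one way to do that (the allowed value sets below are assumptions inferred from the codes visible in this response, not a documented schema):

```python
import json

# Assumed coding vocabulary per dimension, inferred from the sample response.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "none"},
    "policy": {"liability", "industry_self", "none"},
    "emotion": {"outrage", "indifference", "resignation", "none"},
}

def parse_codes(raw: str) -> dict:
    """Parse the LLM's JSON array and index rows by comment id,
    rejecting any row with an out-of-vocabulary dimension value."""
    out = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"bad {dim!r} in {row.get('id')}: {row.get(dim)!r}")
        out[row["id"]] = row
    return out

raw = ('[{"id":"rdc_e7keifq","responsibility":"company","reasoning":'
       '"consequentialist","policy":"liability","emotion":"outrage"}]')
print(parse_codes(raw)["rdc_e7keifq"]["emotion"])  # prints: outrage
```

Failing loudly on unexpected values is usually preferable here, since silent coercion would corrupt downstream tallies of the coded dimensions.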