Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The programmers wouldn’t have had to make errors with the training set — it could just be that the training set is based on a process that has a gender bias in it. For example, if Amazon’s hiring practices discriminate against women, then the AI trained on a data set based on Amazon’s hiring patterns will most likely discriminate against women as well.
Source: reddit · Cross-Cultural · 1539213454.0 (Unix time) · ♥ 11
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_e7jkpus", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_e7j1brn", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_e7ipl28", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_e7ipybi", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_e7j1qhk", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"}
]
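A minimal sketch of how a coding result can be recovered from a raw response like the one above: parse the JSON array and look up the record whose `id` matches the comment being inspected. The function name `find_coding` and the treatment of malformed output as uncodable are assumptions for illustration, not the pipeline's actual implementation.

```python
import json

# Abbreviated copy of the raw LLM response shown above:
# a JSON array with one coding record per comment id.
raw_response = '''[
  {"id": "rdc_e7jkpus", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_e7j1brn", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

def find_coding(raw: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent/unparseable."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON; treat the batch as uncodable
    return next((r for r in records if r.get("id") == comment_id), None)

coding = find_coding(raw_response, "rdc_e7jkpus")
print(coding["responsibility"])  # → developer
```

Batching several comments per request, as the response above does, then splitting the array back out by `id` keeps each coded value traceable to its source comment.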