Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Right, and that's why it's completely silly that this article is in /r/worldnews at all. So they have a bad ML algorithm, either because it's buggy or the input was garbage. What's the big woop? That's probably the case with every research group in the world that has a WIP ML project. This technology was never actually used to hire anyone, it seems-- no one was ever victimized and no one is suing anyone. ZipRecruiter is doing this kind of work too. And they probably have biases present as well. As is the case with self-driving cars, or photo recognition, or whatever else. I just have no idea why this is at the top of this sub. Is it because the flashy title contains "AI"? This is just statistics on big data. Or is it because Business Insider is milking a politicized subject for clicks? For christ sake, the article ends with, "Amazon told Business Insider it was committed to workplace diversity and equality but declined to comment further.", even though nothing even happened. I think that ethical concerns in the face of increasing ML and deep-learning capabilities is very important and interesting. There's lot's of cool activity going on right now in philosophy on the subject. This article just seems shallow and shows little interest in having those technical conversations
reddit Cross-Cultural 1539199839.0 ♥ 567
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_e7jcup6", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_e7j520q", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_e7j7w3s", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_e7j89pj", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_e7jcxyx", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
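The raw response above is a JSON array of per-comment coding records. A minimal sketch (Python assumed; not the tool's actual implementation) of how such a response can be parsed and a single comment's coding looked up by id. Mapping "rdc_e7j520q" to the comment shown on this page is an assumption, made because its values match the Coding Result table.

```python
import json

# Raw LLM response as shown above: a JSON array of coding records,
# one per comment, keyed by a comment id.
raw_response = '''[
  {"id":"rdc_e7jcup6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_e7j520q","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_e7j7w3s","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_e7j89pj","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_e7jcxyx","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]'''

# Index the records by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Hypothetical lookup: the id below is assumed to belong to the comment
# shown above, since its values match the Coding Result table.
record = codings["rdc_e7j520q"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
# company consequentialist none indifference
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a response for manual re-coding.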