Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Articles like the one linked are highly misleading. They know before deploying that there are going to be a lot of false positives. That's by design: these kinds of algorithms, when used for this purpose, need to have as low a false *negative* rate as possible, which necessitates a higher rate of false positives as a consequence. They don't care if they have to sift through 2,000, 20,000, or 200,000 false positives, because the software just eliminated 2,000,000 others they no longer have to waste resources on sifting through manually. Journalists trying to frame it as the police somehow accusing thousands of people of being criminals are doing everyone a disservice.
reddit · AI Harm Incident · 1548817887 (Unix time) · ♥ 13
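The trade-off the commenter describes can be made concrete with a little arithmetic. Below is a minimal sketch in Python; every number in it (total screened, prevalence, error rates) is a hypothetical illustration, not a figure from the incident:

```python
# Illustrative screening arithmetic: tuning for a low false-negative
# rate at low prevalence necessarily produces many false positives.
# All numbers are hypothetical, not taken from the incident.

total = 2_200_000        # items screened
prevalence = 0.001       # fraction that are true positives
fnr = 0.01               # false-negative rate the operator tunes for
fpr = 0.10               # false-positive rate accepted as the cost

positives = int(total * prevalence)          # 2,200 true targets
negatives = total - positives                # 2,197,800 innocent items

true_positives = int(positives * (1 - fnr))  # 2,178 caught
false_negatives = positives - true_positives # 22 missed
false_positives = int(negatives * fpr)       # 219,780 flagged wrongly
true_negatives = negatives - false_positives # 1,978,020 eliminated

flagged = true_positives + false_positives
print(f"Flagged for manual review: {flagged:,}")
print(f"Eliminated automatically:  {true_negatives + false_negatives:,}")
print(f"Precision among flagged:   {true_positives / flagged:.1%}")
```

Even at roughly 1% precision among the flagged items, the screen removes about two million items from manual review, which is the commenter's point.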
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_efbps1n","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_efbotk1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_efbdgve","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"rdc_efbowss","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_efc3v49","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"}]