Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The questions I have are these:
- Do humans and AI make the same kinds of errors? Is the AI missing things that could be obvious to a human expert, or vice versa, implying that using both would allow detection rates neither can achieve alone?
- How good is the sample data, really? When we train visual AI on something like facial recognition, we don't have to be concerned that we're teaching it our biases, because we haven't got any: we're nearly 100% at being able to decide if there is a human face in front of us. But we can't know which images, in which *we* could find nothing, could have subtle features that machine learning could indeed find. It seems to me that at best visual AI could be as good as our very best, but if we want it to find what we cannot, it seems to me we have to find a way to train it to do so.
reddit AI Bias 1569419182.0 ♥ 2
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_f1emvcy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_f1e7zyw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_f1ecjca","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"rdc_f1ecudu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"rdc_f1ez3fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})
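Note that the raw response above opens a JSON array with "[" but closes it with ")", so a strict JSON parse fails, which would explain every dimension landing on "unclear". A minimal sketch of a tolerant parser for this failure mode (the function name parse_coding_response and the repair heuristic are illustrative assumptions, not part of the actual pipeline):

```python
import json

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, repairing the observed case
    where the model closes the JSON array with ')' instead of ']'."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        repaired = raw.strip()
        # Hypothetical repair: swap a stray trailing ')' for the missing ']'.
        if repaired.startswith("[") and repaired.endswith(")"):
            repaired = repaired[:-1] + "]"
        return json.loads(repaired)

# Shortened example mirroring the malformed response above.
raw = ('[{"id":"rdc_f1emvcy","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"})')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # approval
```

If the repaired string still fails to parse, the exception propagates, so genuinely unrecoverable responses would continue to surface as coding failures rather than being silently mis-coded.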