Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's true, a doctor could receive an image and it might have a label attached saying "flagged for possible pneumonia" or something like that. I don't think that would actually save much time for the radiologist though, since they still would have to do their full check for all the conditions they could see on a chest X-ray that aren't pneumonia. Another issue they discuss in the review is that it's hard to predict when the AI will be wrong, and when it is wrong it can be catastrophically wrong in a way a human wouldn't be. This is a major issue with AI: since we let it detect the patterns itself we don't actually know what it's looking at or what can cause it to get tripped up. This means that everything needs to get reviewed by a doctor anyway, and they need to be thorough in the review.
reddit · AI Bias · 1569422684.0 · ♥ 8
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_f1echk6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_f1ehabd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"rdc_f1eh82v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_f1eiyq4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_f1ei00f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"})