Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My expertise is not medicine, but programming. If the "AI" is capable of diagnosing based on an image, I can confidently say that the AI will have all of the dozens of possibilities monitored with a percentage of "confidence" that it is correct. Giving these readouts would be as simple as just writing a few lines of code which might take the developer 15 minutes if they try to make it look nice.
reddit · AI Bias · 1569433863.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:06:44.921194
Raw LLM Response
[
  {"id": "rdc_f1ek593", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_f1ezni5", "responsibility": "developer",  "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "rdc_f1uj2ap", "responsibility": "government", "reasoning": "deontological",    "policy": "none", "emotion": "outrage"},
  {"id": "rdc_f1uo20o", "responsibility": "none",       "reasoning": "unclear",          "policy": "none", "emotion": "mixed"},
  {"id": "rdc_f31czj8", "responsibility": "none",       "reasoning": "deontological",    "policy": "none", "emotion": "outrage"}
]
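The raw response is a batch: the model codes several comments at once, and the row for this comment is matched back by its `id` (`rdc_f1ezni5`). A minimal Python sketch of that lookup, using the JSON shown above verbatim (the variable names are illustrative, not the tool's own code):

```python
import json

# Raw LLM response, copied verbatim from the section above.
raw = """[
  {"id": "rdc_f1ek593", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_f1ezni5", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_f1uj2ap", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_f1uo20o", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_f31czj8", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# Index the batch by comment id, then pull the row for this comment.
rows = {r["id"]: r for r in json.loads(raw)}
row = rows["rdc_f1ezni5"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

The printed values match the Coding Result table above, confirming the displayed dimensions were taken directly from the model's raw output rather than post-processed.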