Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
I'm sorry, I fail to see your point. Each of your bullets makes me favor the capability of the AI more. Missing logical and obvious tests and has misleading or weird random data, but still gets the right answer? (So it does well with incomplete and misleading information?) Not receiving follow-up data or questions? (Why wouldn't it be able to just process these too?) Doesn't change the answers because it has decided a patient wouldn't be able to pay for the correct treatment or test? (So it doesn't have nonmedical biases?) Sure, exams aren't the real world, but these points just make me feel like the AI would continue to perform exceptionally well if it could actually ask follow up questions or request the appropriate tests.
Source: reddit · Topic: AI Responsibility · Posted: 1684475501.0 (Unix timestamp) · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_jkp2ty0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_jkrsykt","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"rdc_jksb7z0","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"rdc_jkq964i","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_jkql1ib","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]