Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Submission Statement. This is an interesting interview, and I was especially interested in the fact that Microsoft has got the hallucination problem down to 0.5 - 1% with medical diagnosis. That makes AI as good as or better than most human doctors. In all the talk about the economic impacts of AI, I'm always struck by how little people talk about deflation. We're living through times with the opposite problem at the moment, but that won't last forever. If Microsoft can make medical diagnosis this good in 2023, very soon others will, and in a few short years that tech will be open-source, common and freely available all over the world. All other AI has followed this trajectory.
Source: reddit · Topic: AI Responsibility · Timestamp: 1692018630.0 · ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       utilitarian
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jw72hc9", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jw7qzzt", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_jw85ojl", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jw4vo9t", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jw537ze", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
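The raw response above is a JSON array with one record per coded comment, keyed by a comment `id`. A minimal sketch of how such a response could be parsed and indexed for inspection, assuming this field layout; the `ALLOWED_EMOTIONS` set and the flagging logic are illustrative assumptions, not part of the tool:

```python
import json

# The raw model output, exactly as returned (one record per coded comment).
RAW = """[
  {"id": "rdc_jw72hc9", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jw7qzzt", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_jw85ojl", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jw4vo9t", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jw537ze", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

# Assumed label set for the emotion dimension, taken from the values
# observed in this response; the real codebook may differ.
ALLOWED_EMOTIONS = {"approval", "indifference", "outrage"}

def parse_codes(raw: str) -> dict:
    """Index coded records by comment id, flagging unexpected emotion labels."""
    by_id = {}
    for record in json.loads(raw):
        if record["emotion"] not in ALLOWED_EMOTIONS:
            record["emotion_flag"] = "unknown_label"  # mark for manual review
        by_id[record["id"]] = record
    return by_id

codes = parse_codes(RAW)
print(codes["rdc_jw537ze"]["reasoning"])  # -> unclear
```

Indexing by `id` makes it easy to look up the record that corresponds to a displayed comment and compare it against the rendered Dimension/Value table.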