Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was just talking about this the other day. The example I used was the radiologist who was freaked out that AI was replacing them because it could read scans faster and with the same accuracy. There will *always* be positions to double check AI. It's just not smart enough to be left to its own devices, and it never will be simply because it's a liability. What if it misses something blatant on a scan or hallucinates a diagnosis? That's a major lawsuit for the medical practice. Instead, what will likely happen, is they'll use AI to speed up detection/get through more scans a day, but they'll have to be final-checked by a radiologist to ensure accuracy. I see this in every industry. You'll have AI that builds code, software engineers to double check that code. AI to automate factories, and employees to watch the automation and step in if there's a problem. And so on. This is just another tech bubble over something exciting and scary. It won't last. Tech is a very volatile industry unfortunately, but that's because of its constant evolution. But to the same tune, because of the constant evolution, it will always bounce back.
reddit · AI Jobs · 1763560885.0 · ♥ 10
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_npo4adb","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"rdc_npo2n0k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_nppa6i7","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"rdc_nppkx7f","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"rdc_npq3ypv","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]