Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The number one reason AI won't replace us is trust. I use AI as a tool in my job when it's useful. One thing I've thought about trying is an AI that turns verbal instructions into command line statements. I'm reasonably fluent in bash, but when it comes to more complex stuff, I have to look it up. I think an AI would be a lot faster at getting done some minor one-time file maintenance task. But how do I know the AI won't accidentally do "rm -rf /"? So, I at least have to check the output of the AI before I hit return. The same applies to anything we do. I don't think companies will want to deploy customer-facing code generated by AI without having an expert check it. But it can make engineers more efficient, which means maybe you need fewer engineers. Or you can do more.
reddit AI Jobs 1709832831.0 ♥ 3
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ktsox7r", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "approval"},
  {"id": "rdc_ktsyhs6", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "rdc_ktsl9yw", "responsibility": "user",        "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_ktt4wug", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "rdc_ktt76vy", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "fear"}
]
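As the example above shows, the raw response is a JSON array with one record per comment id, and the coding result shown for a comment is the record whose "id" matches it. A minimal sketch of that lookup (the function name `code_for` is hypothetical; the field names come from the response above):

```python
import json

# Abbreviated raw LLM response, copied from the example above.
raw = """[
  {"id": "rdc_ktsox7r", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_ktsl9yw", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]"""

def code_for(comment_id, response_text):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(response_text)
    return next((r for r in records if r.get("id") == comment_id), None)

record = code_for("rdc_ktsl9yw", raw)
print(record["responsibility"], record["policy"])  # user industry_self
```

Parsing the whole array first (rather than regex-matching one record) also surfaces malformed model output early, since `json.loads` raises on invalid JSON.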