Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This matches what I see on my team too. The gap isn't "can you use AI" - basically everyone can prompt their way to working code now. The gap is whether you know what questions to ask the code before shipping it. Your colab guy couldn't tell you what NMS was doing in his own pipeline. That's the tell. He wasn't debugging, he was regenerating. Every time something broke he'd paste the error back in and hope for different output. No mental model of what the code was supposed to do in the first place.

The people on my team who are actually dangerous with AI are the ones who already understood the domain cold. They use it to skip the boring parts (boilerplate, data loading, config files) and spend their time on the parts that actually matter - validating assumptions, checking edge cases, reading the loss curve instead of trusting a number. AI made them 2-3x faster at the stuff they were already good at.

The scary part for hiring though - you can't tell the difference in a 45 minute interview anymore. Both types can talk through a solution. You only find out which one you hired about 3 weeks in when something breaks and one person debugs it while the other one just keeps regenerating.
reddit AI Jobs 1773497875.0 ♥ 9
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oae70hu", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oadnt27", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oael5l2", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oaen2gi", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oaft6g2", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
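A minimal sketch of how the coded dimensions in the table above can be recovered from the raw batch response. This assumes only that the raw output is a well-formed JSON array of records keyed by `id`; the id `rdc_oael5l2` is taken from the response above, and the field names (`responsibility`, `reasoning`, `policy`, `emotion`) are those visible in the response itself. Any surrounding pipeline code is hypothetical.

```python
import json

# Raw model output, as shown above (one JSON array covering the whole batch).
raw = """[
  {"id": "rdc_oae70hu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oadnt27", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oael5l2", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oaen2gi", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oaft6g2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

# Parse the batch and index records by their comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the record for the comment shown on this page.
coded = by_id["rdc_oael5l2"]
print(coded["responsibility"], coded["reasoning"], coded["emotion"])
# → user consequentialist indifference
```

Indexing by `id` rather than by list position makes the lookup robust to the model reordering records within the batch.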