Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
12 years managing engineering and DS teams in fintech. this is the sharpest version of this problem i've seen posted here. the debugging muscle isn't just about sitting with broken code. it's about building a mental model of how your system actually works under failure conditions, not just the happy path. that's what lets senior engineers diagnose things fast. they've internalized the failure modes. what i've started doing: when a bug comes in, i don't let juniors re-prompt the AI to fix it. i make them draw out what they think the data flow looks like before touching the code. half the time they can't, and that's the actual gap. once they can trace the flow manually the AI becomes a tool again instead of a crutch. the scarier version of this is the mid-level engineer who shipped with AI for two years and never built that model at all. at least juniors know they're junior. a mid with years of AI-assisted shipping who can't debug under pressure is a much harder conversation to have.
reddit · AI Jobs · 1777049798.0 · ♥ 1
Coding Result
Dimension: Value
Responsibility: none
Reasoning: unclear
Policy: none
Emotion: approval
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_oi1mqfs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_ohzq3g3","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_ohztznw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_oi0nhb7","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi004vl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
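A minimal sketch of how a batch response like the one above could be parsed and matched back to an individual comment id, using only the standard-library `json` module. The ids and field names are taken directly from the output shown; everything else (variable names, the lookup pattern) is illustrative:

```python
import json

# The raw model output above: a JSON array of per-comment codes.
raw = """[
  {"id":"rdc_oi1mqfs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_ohzq3g3","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_ohztznw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_oi0nhb7","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi004vl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]"""

codes = json.loads(raw)

# Index the batch by comment id so one comment's coding can be looked up
# directly, e.g. to compare against the rendered Coding Result table.
by_id = {c["id"]: c for c in codes}

print(by_id["rdc_oi1mqfs"]["emotion"])  # approval
```

Looking up `rdc_oi1mqfs` this way reproduces the Coding Result shown above (responsibility: none, reasoning: unclear, policy: none, emotion: approval), which is one quick check that the rendered table matches the raw output.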