Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can confidently say that LLMs can't replace people, based on anecdotal experience. I joined a big tech company last year that was in the middle of a huge layoff. I was handed a project that sounds like something an AI would orgasm over: a lot of slides, initial design docs, system design docs, team docs, evaluation results, and a "working" code base, working in the sense that if you feed it something, it spits out something that looks correct. There was a major bug, caused by a data query, that crumbled the whole thing. The previous owner used AI. When I fed the relevant docs to Claude 4.6, it happily verified that the code worked. Only after extensive debugging did I find the bug, and only then did Claude accept the error. Admittedly this was caused by data quality issues, but I suspect this is and will be very common, especially with legacy systems.
reddit · AI Jobs · 1777077579.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi3ygdi", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_oi3xnwn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi45pz4", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oi4ahcb", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_oi3n0fs", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
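As an illustration, a raw response in this shape (a JSON array of per-comment codes) can be parsed and tallied with the standard library alone. This is a minimal sketch, assuming only the field names visible in the record above; the two sample records are copied from the raw response, and nothing else about the coding pipeline is implied:

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above, for demonstration.
raw = """[
  {"id": "rdc_oi3ygdi", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_oi45pz4", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]"""

codes = json.loads(raw)

# Index codes by comment id for quick lookup of a single comment's codes.
by_id = {c["id"]: c for c in codes}

# Tally each coding dimension across all comments in the batch.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {d: Counter(c[d] for c in codes) for d in dimensions}

print(by_id["rdc_oi45pz4"]["emotion"])  # "mixed", matching the table above
print(tallies["responsibility"])
```

Looking a comment up by `id` is how the per-comment table above (Responsibility/Reasoning/Policy/Emotion) would be populated from the raw array.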