Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My LLMs can't write simple unit tests, or even update existing unit tests. They can't handle enums at all, period. They always hallucinate enumerations that don't exist based on the surrounding code; they imagine what they think the enumeration names *should be*, instead of looking up the enum definitions to read the actual enumeration names. And the last time I asked Claude to update a unit test, "here's the command to run the test, keep going until it passes", it literally deleted the lines of code that set the test conditions so it was impossible to fail and declared it was done. LLMs aren't replacing me anytime soon. And more importantly, I have never once received a complete or perfectly accurate JIRA ticket.
reddit · AI Jobs · 1752766567.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n3mi7i6", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n3mnkh9", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n3ncmq7", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n3p0y5z", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3q1cwa", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
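The raw response is a JSON array, one object per coded comment, keyed by a record id with one field per coding dimension. A minimal sketch of how such a response could be parsed and validated follows; the allowed-value sets are assumptions inferred only from the values visible in this particular response, not the tool's actual codebook, and `parse_codes` is a hypothetical helper name.

```python
import json

# Assumed allowed values per dimension, inferred from this response only.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"mixed", "unclear", "consequentialist"},
    "policy": {"none"},
    "emotion": {"resignation", "mixed", "outrage", "indifference"},
}

RAW = ('[{"id":"rdc_n3mi7i6","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"resignation"},'
       '{"id":"rdc_n3mnkh9","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"mixed"},'
       '{"id":"rdc_n3ncmq7","responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"outrage"},'
       '{"id":"rdc_n3p0y5z","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"},'
       '{"id":"rdc_n3q1cwa","responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"mixed"}]')

def parse_codes(raw: str) -> dict:
    """Parse a raw coding response into {id: {dimension: value}},
    rejecting any dimension or value outside the assumed codebook."""
    out = {}
    for rec in json.loads(raw):
        codes = {k: v for k, v in rec.items() if k != "id"}
        for dim, val in codes.items():
            if dim not in ALLOWED or val not in ALLOWED[dim]:
                raise ValueError(f"unexpected {dim}={val!r} in {rec['id']}")
        out[rec["id"]] = codes
    return out

codes = parse_codes(RAW)
print(codes["rdc_n3ncmq7"]["emotion"])  # → outrage
```

Validating against a fixed codebook at parse time is one way to catch exactly the failure mode the comment above describes: a model inventing enumeration values that don't exist.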