Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I use common stacks with very common use cases and Claude has been a pain in the ass. I build out a plan, let it go, it creates a ton of code, and in the end, the product doesn't work. So now I'm either burning tokens to let it go in circles until it figures it out, or I have to fix it myself. I've heavily invested my time into using these tools since ChatGPT 3.5, so several years now, and it doesn't feel like much has changed. Sure, it solves a couple more bugs, can do a bit more correct work, but the end result is always a broken product that I have to fix. In the end, I'm in a better spot if I just don't let Claude write any of the code and only use it as Google.
reddit · AI Jobs · 1767635935.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mle5pcu", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mmc72p4", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mle6efo", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mlflhnp", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_nxutfzk", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]