Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can attest to this. I've tried vibe coding by using it to write a web app in a new language, and I experienced the same thing even when I did it myself. Did the app work eventually? Yes, but at a cost. It's difficult to maintain, in my case, because the AI throws the concept of decoupling out the door: it over-prioritizes time complexity, which is hard to spot since I'm unfamiliar with the syntax of this language. This led to many bugs and rewrites over time, as the AI kept saying "oh, the real issue is X, let me change it... no wait, actually X is not the issue, there's this other piece of context I just remembered and the issue is actually Y," etc. Mind you, this is a very simple app at this point: it scrapes data from one source and stores it in another as JSON to be consumed by another process. Overall it's a nice tool to have, but it's just that: a TOOL. And to use the tool properly you have to prompt it properly and have the skill to understand what it spits out, because it will never be 100% accurate 100% of the time.
reddit AI Jobs 1773498518.0 ♥ 6
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oae70hu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oadnt27", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oael5l2", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oaen2gi", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oaft6g2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
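The raw LLM response is a JSON array of coding records keyed by comment id, from which the single-comment coding result above is extracted. A minimal sketch of that lookup, assuming the response parses as valid JSON (the helper name `coding_for` is hypothetical, not part of the tool):

```python
import json

# Raw LLM response: one coding record per comment id, as shown above.
raw = '''[
  {"id": "rdc_oae70hu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oadnt27", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oael5l2", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oaen2gi", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oaft6g2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

records = json.loads(raw)

def coding_for(records, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

# The coded dimensions for this comment match the table above.
coded = coding_for(records, "rdc_oaen2gi")
print(coded["responsibility"], coded["emotion"])  # ai_itself resignation
```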