Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
100% this, we use it but AI building integrated business features always fails, it leaves things out, hallucinates new things, and changes things you don't ask it to. Sometimes, it does what you want. Specific enough prompts, proper context, versioning details can help syntax and whatnot, but business logic just seems to get lost in the sauce.

An example I give, is go 1 step beyond a simple TODO app and implement an actual CRUD app with any sort of logic or functionality to it, you immediately start to see the flaws with AI development, it does not know what problem you want to solve, it just sees patterns others have solved, and oftentimes regurgitates that expecting you to be satisfied.

This is why anyone who says vibe coding is the future should be laughed out of the room, without being able to determine if what you wanted was the right answer, meaning you already knew exactly what you wanted the AI to do, you cannot be sure what it produced is what you wanted, exactly. But at that point it really has become just a helper, something to automate the entry of the idea into code. When that works, I can turn an hour-long task into a 10 minute task, but I still spend an exorbitant amount of time validating the response is satisfactory.

Frankly from novel development standpoint AI is useless to me, however it's most useful when automating scaling tasks. Feels like GPU vs CPU.
reddit · AI Jobs · 1745589463.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_moyq8se", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_moywep5", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_moyyswy", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_moz01sr", "responsibility": "company",   "reasoning": "deontological",    "policy": "none", "emotion": "fear"},
  {"id": "rdc_mozg8nh", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "outrage"}
]
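A response in this shape can be checked and indexed before the dimension values are accepted into a coding table. A minimal Python sketch, assuming the raw output is a JSON array with exactly the five keys shown above (the id values are taken from the response itself; nothing else here is part of the pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, one object per comment id.
raw = """[
  {"id": "rdc_moyq8se", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_moyyswy", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codings = json.loads(raw)

# Reject malformed records early: each entry must carry exactly the expected keys.
for record in codings:
    if set(record) != EXPECTED_KEYS:
        raise ValueError(f"unexpected keys in record {record.get('id')!r}")

# Index by comment id so the coding for a single comment can be looked up directly.
by_id = {record["id"]: record for record in codings}

coding = by_id["rdc_moyyswy"]
print(coding["responsibility"], coding["emotion"])  # ai_itself resignation
```

The lookup mirrors how the "Coding Result" table above relates to the raw response: the displayed row is just the record whose id matches the comment being viewed.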