Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s very useful and also extremely overhyped. Reddit has a way of splitting into two extreme viewpoints on AI coding: one side acts like it’s garbage and will never accomplish anything (mainly due to fear), and the other acts like we’re already dealing with AGI and it can do anything now, or will be able to very shortly (usually brought on by being uninformed or buying into propaganda).

The truth is that it’s extremely useful. I use it nearly daily, it has even helped me solve some issues I may have been unable to get past myself, and it speeds up my dev process immensely. The downside is that hallucinations are still a huge issue: it goes on tangents at times, it struggles to keep context, it struggles to ask for context, it struggles to follow directions at times, and it has a host of other issues. Not to mention a good chunk of the software engineering process isn’t writing code, which is most of what AI is actually good at currently. I maybe only write code for a solid 30-45 minutes a day; outside of that is a bunch of other things my job entails.
reddit · AI Jobs · 1743792245.0 · ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mle7w93","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_mle5rfl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"rdc_mle6l7f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_mle7tmh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"rdc_mlf3fmz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
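The raw response above is a JSON array with one object per coding run, each carrying the same four dimensions shown in the result table. A minimal sketch of parsing it and tallying the per-run emotion codes (the rule that collapses the five runs into the final "mixed" value is not stated on this page, so only the tally is shown; no aggregation is assumed):

```python
import json
from collections import Counter

# Raw LLM response copied verbatim from above: one object per coding run.
raw = (
    '[{"id":"rdc_mle7w93","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_mle5rfl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},'
    '{"id":"rdc_mle6l7f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_mle7tmh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},'
    '{"id":"rdc_mlf3fmz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]'
)

runs = json.loads(raw)

# Tally each dimension across runs; emotion is the only one that varies here.
emotion_counts = Counter(run["emotion"] for run in runs)
print(len(runs))           # 5 coding runs
print(dict(emotion_counts))  # {'indifference': 2, 'approval': 2, 'mixed': 1}
```

With indifference and approval tied at two runs each and one run coding "mixed", the final "mixed" in the table is consistent with a tie-breaking step, though that rule is an assumption rather than something this page documents.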