Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For things like generating a motion or a piece of software, AI works well when it works. The problem is you can't trust it. You have to review its output, and if it's complex you may need to verify its logic and sources. If it made serious mistakes, and it does often, you can either mindlessly regenerate and hope not to find new errors, or you need to dig in and do some of that work by hand. There are cases where the output is so complex and wrong that it's faster for me to build something from scratch that I understand. It's honestly like trying to salvage shitty work from a coworker who can't explain what they did. It can be faster and more useful to do it yourself. Also, our context awareness is much larger than the AI's when it comes to understanding the problem we are trying to solve, leading to iterative improvements it doesn't know to look for. For now I think it is much safer to use AI in small, constrained iterations to speed things up. It can make too much of a mess in seconds to use for serious work unless you plan to commit to vibing out a solution.
reddit AI Jobs 1753637124.0 ♥ 3
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_n5kf4r9","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
 {"id":"rdc_n5go5n9","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"rdc_n5gkd2g","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"rdc_n5gric4","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
 {"id":"rdc_n5gyneq","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"})
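Note that the raw response closes its array with `)` rather than `]`, so it is not valid JSON; a strict parser rejects the whole payload, which would explain why every dimension above was recorded as "unclear" even though the model emitted concrete codes. A minimal sketch of that behavior, assuming the pipeline falls back to "unclear" on a parse failure (`parse_coding`, `FALLBACK`, and the demo strings are hypothetical names for illustration, not the tool's actual code):

```python
import json

# The four coding dimensions; "unclear" is the assumed fallback value.
FALLBACK = {"responsibility": "unclear", "reasoning": "unclear",
            "policy": "unclear", "emotion": "unclear"}

def parse_coding(raw: str, comment_id: str) -> dict:
    """Return the codes for one comment from the model's JSON array,
    or the all-'unclear' fallback when the payload doesn't parse."""
    try:
        codes = json.loads(raw)
    except json.JSONDecodeError:
        return dict(FALLBACK)  # e.g. the array closed with ')' not ']'
    for code in codes:
        if code.get("id") == comment_id:
            return {k: code.get(k, "unclear") for k in FALLBACK}
    return dict(FALLBACK)  # comment id missing from the response

# Illustrative payload ending in ')' like the raw response above.
bad = ('[{"id":"rdc_n5kf4r9","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"approval"})')
good = bad[:-1] + "]"  # same payload with the bracket repaired
```

Under this assumption, `parse_coding(bad, "rdc_n5kf4r9")` yields the fallback, while the repaired `good` string yields the developer/deontological/liability/approval codes the model actually produced.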