Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah this is the real tradeoff. They’re optimizing for shipping, but skipping the “why does this break” loop. That’s where the mental model comes from. I don’t think the issue is using AI, it’s *how* they use it. If the model writes everything, you never build the chain of reasoning needed to debug. What helped me was forcing a rule: don’t accept code until you can explain it line by line. Also making small changes and seeing what breaks. That feedback loop matters a lot. Tools are fine, but if you don’t slow down sometimes, you end up owning code you don’t understand. Even something like quickly prototyping on Runable is great for speed, but the real skill comes after, when you take it apart and make sense of it.
Source: reddit · AI Jobs · timestamp 1777029172.0
Coding Result
| Dimension      | Value                       |
|----------------|-----------------------------|
| Responsibility | developer                   |
| Reasoning      | consequentialist            |
| Policy         | none                        |
| Emotion        | approval                    |
| Coded at       | 2026-04-25T08:33:43.502452  |
Raw LLM Response
```json
[
  {"id": "rdc_oi1mqfs", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohzq3g3", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohztznw", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "outrage"},
  {"id": "rdc_oi0nhb7", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi004vl", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "mixed"}
]
```
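A response like the one above can be checked before use: each record should carry the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). The following is a minimal sketch of such a validation pass; the `parse_codes` helper and `REQUIRED_KEYS` set are hypothetical names for illustration, not part of any coding tool referenced here.

```python
import json

# Two records copied from the raw response above, used as sample input.
raw = '''[
  {"id":"rdc_oi1mqfs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_ohzq3g3","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# The five coding dimensions every record is expected to provide.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw LLM response and index the codes by comment id.

    Raises ValueError if the payload is not a JSON array or if any
    record is missing a required field.
    """
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    codes = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        codes[rec["id"]] = rec
    return codes

codes = parse_codes(raw)
print(codes["rdc_ohzq3g3"]["reasoning"])  # consequentialist
```

Indexing by `id` makes it straightforward to join the model's codes back to the original comments, and the explicit key check surfaces truncated or malformed model output early instead of failing later in analysis.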