Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s easy. Push out a bunch of ai slop, and then while the suckers are reviewing it, actually write the thing you need with zero help from AI. They’ll either accept the slop, which means you can send another slop PR, and then you continue doing the first thing you did without AI. You’ll use your tokens but if you actually do it from scratch you’ll learn. Then when shit breaks, back out your original PR and add the hand coded one. Then you’ll have code you understand that’s running. It’s a very delicate balance though. I would try to write at least one PR from scratch without ai per week, and test it and make sure it works and stuff to the maximum extent possible, to get that deep learning. It’s super easy to game the AI stuff though, just game the fuck out of it. Have an agent just in a loop “improve the code”.  This is so bad because this is just really weird abuse of what’s known as Goodhart law [xkcd.com/2899](xkcd.com/2899) and the metric is “ai usage”. Basically game the system and force yourself to code without AI. Good luck
reddit · AI Jobs · 1777062054.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_oi1tqnp","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_oi1xidx","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_oi2u5x8","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"rdc_oi3g3oj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi42y9y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
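As a sketch of how the raw response maps back to the coding table, the JSON array above can be parsed and the record matching this comment's coded dimensions located. The field names come directly from the JSON; the matching criteria (responsibility=user, reasoning=virtue, emotion=mixed) are taken from the coding result shown earlier.

```python
import json

# Raw LLM response exactly as shown above: a JSON array of coded records.
raw = """[
  {"id":"rdc_oi1tqnp","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_oi1xidx","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_oi2u5x8","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"rdc_oi3g3oj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi42y9y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

records = json.loads(raw)

# Select the record whose dimensions match the coding table for this comment.
match = next(
    r for r in records
    if r["responsibility"] == "user"
    and r["reasoning"] == "virtue"
    and r["emotion"] == "mixed"
)

print(match["id"])  # rdc_oi2u5x8
```

This identifies `rdc_oi2u5x8` as the record behind the table, with the remaining four records presumably coding other comments from the same batch.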