Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that there's a bad toupee problem, and most people aren't noticing the sites that are mostly made by AI because they don't announce themselves as such. I do have to make SOME high level decisions, but if I tell Claude to follow a best practices document and plan out work in structured documents ahead of actually executing it I get much better results than when I was just prompting. It does keep breaking things, but the pace at which it goes ten steps forward and one step back is worth it IMO. Some of the structured plans it's making are over 2k lines (broken up into smaller files so as to not overwhelm the contexts of agents working on particular tasks). I'm having it work on a project with over 3k source code files right now. I feel like there is a bit of high level understanding that is helpful, but the main barriers to entry right now are being able to navigate a terminal and understanding what kind of context to give it... and it can write a lot of that context itself. My current best practices document was written by Claude and the only real input I gave it is that it should never use git (it previously made modules that resulted in me not committing all important changes).
youtube AI Jobs 2025-12-16T02:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw3Z5Kc-k_VwdeXtbd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwY2euVpQRUl_MDnmp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzwu7hg2fw2cLo9LqF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwugnyApxPmAivZ61l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwY01MT7UVAGHuSCQd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxLr2LZzXQqsmli1oh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyZrX2pEroJY-RJeVF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw5My7YchftwOrhYkR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwv_RZhWtZ0yxdnvLV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyp8fmbF7rOfw-wRbZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
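The coding result shown above is just the raw-response record whose id matches this comment. A minimal sketch of that lookup, assuming the raw LLM response is valid JSON (the parsing code here is illustrative, not the pipeline's actual implementation; the id is copied from the raw response above):

```python
import json

# Raw LLM response, abridged to the entry for this comment.
raw = """[
  {"id": "ytc_Ugzwu7hg2fw2cLo9LqF4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]"""

# Index the batch of coded comments by id, then pull this comment's coding.
codes = {entry["id"]: entry for entry in json.loads(raw)}
coding = codes["ytc_Ugzwu7hg2fw2cLo9LqF4AaABAg"]

print(coding["responsibility"])  # user
print(coding["emotion"])         # approval
```

The same lookup reproduces every row of the dimension table except "Coded at", which is a pipeline timestamp rather than part of the model output.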