Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’ve been writing software for ~25 years. At this point, I don’t manually write code anymore. AI generates 100% of it. But that doesn’t mean I type “build X” and walk away. Knowing how to code doesn't mean knowing how to direct AI to code. There’s a learning curve. I think of it like someone who’s only ridden horses trying to drive a car with no instruction. They’ll say “this thing doesn’t work” and go back to the horse. But once you know how to drive, the car obviously wins.

The bottleneck today isn’t coding. It’s research + planning + context. AI still needs a human to:
– build the plan
– review the plan (it’ll miss a lot)
– iterate and add missing context
– review the implementation (not just “does it run”, but does it respect real-world constraints)

It can be hard to predict all the context an AI needs upfront. It's much easier to give it some %, see what it gets wrong, then walk it back and refine. Over time, your codebase itself becomes context, so earlier mistakes happen less frequently, but new ones show up.

Best analogy I’ve found: AI today is like someone who’s read every programming book and seen every example, but hasn’t built real systems, end-to-end, in messy production environments. It knows what code should look like, but not always when to question assumptions or flag risk.

So no, I don’t think companies are firing people because AI agents can fully replace them. But yes, this profession is absolutely changing. Will AI replace us eventually? I’m not arrogant enough to say “never.” But right now, the leverage it gives experienced devs is massive and the job is shifting from typing code to architecture, context, and critical review.
youtube AI Jobs 2026-02-13T21:1… ♥ 2
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwQIyg_trE2peEouyB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxv_q1rb-A94wfGIet4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxg7swDPT5drIdeAnB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzonec-A8mKpl-W79x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz6_eY3McT4d1Wyu-R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw45OXekiEoysaB2094AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnaVMKffrXl0xZbP94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxZATPYTFoQ2iKEhwV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVSgrXwsUSxQJqIN94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwmm25Do_AhDAFZBf14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
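A raw response like the one above can be parsed and sanity-checked before the codes are loaded into the coding table. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the values observed in this record (the real coding schema may permit more categories), and the one-element `raw` string is a shortened stand-in for the full ten-comment array.

```python
import json

# Allowed values per dimension, inferred from this record alone
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this dataset start with the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Shortened stand-in for the full raw response above.
raw = ('[{"id":"ytc_UgwQIyg_trE2peEouyB4AaABAg",'
       '"responsibility":"user","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
codes = validate_codes(raw)
print(len(codes))  # 1
```

A check like this catches the common failure mode of LLM coders: a value outside the codebook (e.g. a misspelled or invented category) fails loudly instead of silently entering the table.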