Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From my experience, here are some thoughts.

1. AI is a tool, just like a hammer. It's not a one-size-fits-all magic machine that does everything perfectly. You wouldn't build an entire house with only a hammer, and you shouldn't build an entire application with only AI. You still need a responsible engineer to have the final say. When used properly, though, it can be a great accelerator.

2. The model you choose can have a great bearing on your user experience. This changes rapidly in this "golden age" of AI development, but as of this writing, I've found the Claude Sonnet 4.5 model most useful.

3. The buck still stops with the engineer. They are ultimately responsible for the code they commit and cannot point at the AI and say "the AI did this." Again, the AI is a tool, just like your IDE, linting, plugins, etc. Use it wisely, know how to use it to best effect, and most of all, know when not to use it.

4. Understand what the AI is doing. Sometimes AI can run you in circles or down the wrong path. Give it a little leeway to hunt for answers and write some code, but don't let it out of your sight. If it starts going down the wrong path, stop, take over, and point it in the right direction. YOU are the engineer.

Some of the issues with AI: From the top, they see a demo and think, great, I can eliminate this much cost overhead, without thinking about the quality drop inherent in a non-deterministic system (kind of like us). From the bottom, you lose built-up system knowledge that AI just doesn't, and can't, have. People who have experienced nuance over the years they've been there have memorized those patterns as being unique to the engineers who came before them. Then when something comes up, they know they've seen it before, what it is, and how to fix it. When you eliminate all those people, you also eliminate the layers and layers of quality built into the SDLC process. The engineers test their work. The QAs test the engineers' work. Product tests the QA deliveries. Managers test the release. The releases are tested in each environment. Etc.

Does the AI care about all of the requirements? Not just the customer requirements in the story, but logging, auditing, compliance, legal, etc.: stuff that people know is needed even though stories might not spell it out. AI can have blind spots, even though it will see them if you point them out. I just had an issue where it wrote tests that would write multiple files to disk with names differing only by letter case; the test was supposed to test a file mask for case insensitivity. The AI didn't consider that this was a Windows application and that the file system couldn't support multiple files whose names differ only by case. The same file got written over and over, ultimately creating 1 file instead of 3. The AI was convinced that disk commit latency was the issue. I'm on a really fast system, but come on... not likely. I finally had to stop, debug the test myself, and tell it what the problem actually was. Then it immediately recognized the issue, fixed the test, and we proceeded.
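The file-collision pitfall the commenter describes can be sketched without touching a real Windows volume. The snippet below is a minimal, platform-independent simulation (the function name and sample filenames are hypothetical, not from the comment): on a case-insensitive file system such as NTFS, paths that differ only by letter case resolve to the same file, which we model here by case-folding the names.

```python
def casefold_collisions(filenames):
    """Group filenames that would resolve to the same file on a
    case-insensitive volume (simulated via str.casefold)."""
    seen = {}
    for name in filenames:
        # Names differing only by case share one case-folded key,
        # just as they share one file on NTFS.
        seen.setdefault(name.casefold(), []).append(name)
    return {key: names for key, names in seen.items() if len(names) > 1}

# Three "distinct" test files that are really one file on Windows:
names = ["Data.TXT", "data.txt", "DATA.txt", "other.log"]
print(casefold_collisions(names))
# {'data.txt': ['Data.TXT', 'data.txt', 'DATA.txt']}
```

A test written this way would catch the collision up front, instead of observing one file on disk and blaming disk commit latency.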
youtube AI Jobs 2026-01-20T14:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwHdnBkADgXn40WusV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwsyY0f1cnWtZ98gRR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzlATi-qooH1JTzzO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmpdCzVX3XUUviNcl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFay6Mn_5pPJWKTat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxVnVLUJo9u5s7s8R54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxam7-Bhf0b0fblqY14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxxW_k-EAzqVJtC8Fl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxfEDcdHMHUZ9wrvvB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugy33SwyO7IjUx90JOB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
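A minimal sketch of post-processing a raw LLM response like the one above: parse the JSON array and tally one coded dimension. This assumes only the four fields visible in the records (id, responsibility, reasoning, policy, emotion); for brevity, just two of the ten records are reproduced inline.

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgwHdnBkADgXn40WusV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwsyY0f1cnWtZ98gRR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)
# Tally the coded "emotion" dimension across records.
emotions = Counter(record["emotion"] for record in records)
print(emotions)  # Counter({'fear': 1, 'approval': 1})
```

The same pattern extends to the other dimensions (responsibility, reasoning, policy) by swapping the key passed to the generator.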