Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a software developer with 25 years of experience who is no longer writing code (an AI agent writes it for me) I have a few things to say.

1. AI is an incredible, tremendously useful, almost magical tool. But IT'S JUST A TOOL. It can't replace humans. It can't be held accountable.

2. Vibe coding, in its "strictest" meaning (leave the AI write all the code and blindly accept it) is only good for prototypes and very simple harmless projects. For real-life projects it's "suicidal".

3. Junior developers shouldn't be allowed to use AI tools at all until they are capable of writing reasonably good code by themselves. In the same way that we still teach our kids how to do basic math calculations and how to write by hand despite having calculators and word processors readily available.

4. Honestly, I'm not sure if AI is saving me development time. But that's not the point. AI allows me to focus on the problem I'm trying to solve and the design of the solution I'm trying to implement, instead of the nuances of the code; it's a shift in focus that yields awesome results. And from time to time, it even gives me good ideas or suggestions that I hadn't thought about.

5. If you lose months worth of work because an AI agent wiped your hard drive, it's your fault, not the AI's. You know, there are things like backup copies and source control to get disasters like this covered. What if you hard disk had failed for any reason, or if your computer had caught fire or had fallen to the floor?

My company is not replacing developers with AI. Instead, it's encouraging us to learn how to use AI effectively, and paying for whatever AI tools and education we need. And at the end of the day, if we write crappy code it's our fault, regardless of how much AI we use.
youtube AI Jobs 2026-02-05T11:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxL8UrjUD1EKJoR0Md4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_Ugzm4NGm9r27FvxzYUF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgycdKCKfZqa25U6e194AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgwdlEcAmNZpqm8wcn94AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgxQfl03Lo1Ajw-ij-t4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxY5HsyuQDTzwQkpIB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugze-LlMwAxEcAQ4vDt4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_Ugz-oEEwfDcIjzbTAUF4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_Ugwlt_x0H1fKn3AQzZd4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxerlCvnT_bKt8vigJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"}
]
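Since the raw response is a JSON array with one object per coded comment, the coding for any single comment can be recovered by parsing the string and indexing on `id`. A minimal sketch, assuming the response text is held in a variable (`raw_response` is a hypothetical name; the entry shown is taken from the response above):

```python
import json

# Raw LLM response text: a JSON array, one object per coded comment.
# Shortened here to the entry for the comment shown on this page.
raw_response = """
[
  {"id": "ytc_Ugzm4NGm9r27FvxzYUF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

rows = json.loads(raw_response)

# Index the codings by comment id so any comment's row can be looked up directly.
by_id = {row["id"]: row for row in rows}

coding = by_id["ytc_Ugzm4NGm9r27FvxzYUF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself approval
```

The same lookup pattern works for the full ten-element array; a missing or malformed `id` would surface as a `KeyError`, which is a quick sanity check that the model returned one row per comment.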