Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've been using ai tools for a few years now, and while I agree with most of the ideas of this video I have noticed a shift in the last few months. These tools are now better at understanding overall architecture, goals, bigger pictures and and automated appropriate usage of sub agents. As a developer, to me this was the biggest thing that it was missing and we've done a lot of work to incorporate this into our development stack. The amount of work that it's taken to make these tools effective it's constantly shrinking. Like with any new technology I do believe that the growing pains are being over exaggerated for what it's really going to look like in two or three years. It does make me curious what an entry level developer job will look like when I genuinely do believe that all entry level tasks and assignments will be fully capable of being completed by AI systems. To provide a bit more context, The work I mentioned to get these tools effective, involves a lot of safeguards and micro task distribution to avoid large edits, deletions, and removing the ability for systems to go outside of our control. Well this does require micro steps and approvals it's completely feasible that this style of production line could also be automated with redundant review systems in functionality checking version control. The biggest issue people have today is that they put too much trust in AI managing their code base, but advancements in this alone using the current models would make a massive shift in the developer replacement narrative.
youtube · AI Jobs · 2026-03-10T17:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
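
The same result can be held as a small record for downstream use. A minimal sketch, assuming nothing beyond the table above; the Coding class and field names are illustrative, and the values are copied directly from the Coding Result.

from dataclasses import dataclass

@dataclass
class Coding:
    """One coded comment: the four coding dimensions plus the coding timestamp."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str

# Values copied from the Coding Result table above.
result = Coding(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="approval",
    coded_at="2026-04-27T06:24:53.388235",
)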
Raw LLM Response
[ {"id":"ytc_UgyVcQ5cmAL3fb8ij_t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzr02reWKGQltTRV_94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxSQcSjCvFd54xhyeV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx9SZQzLNAffawvmO94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzsLODuJQJoRxUi85p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz9waY-DD8Jam4sq4t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxTCJT-Tu3U48vJzOJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyM4SQcbRNZhZkKyQN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwuf_g9RZT7zCST1E54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxnZ4s_0wysjZ0bi614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]