Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a recent graduate in software engineering (3 years ago), I have not been able to get an interview in the industry, much less a job. Even before I graduated I saw where things were going with the AI industry; I think many of us did. Companies held onto senior engineers and let the juniors go, never once thinking: what happens when the senior devs retire? If there's a problem, you're going to need people in junior, mid-level, and even senior roles to go in, fix problems, and bring creativity to the table. The way the tech industry is now, you have to fight or trick AI to get your resume in front of anyone; 98% of resumes are just binned by the tech immediately. And even if you do somehow get an interview, you are gatekept by AI having you do tests in which the only answer is the way 'they conceived the solution'. So there's only one way to do it, and on a strict 45-minute timetable. I've talked to senior engineers who can't get past that gate, because humans come up with diverse solutions, not a cookie-cutter, one-size-fits-all one. If you name your variables 'wrong' (according to the AI) [for example, you use int b instead of int a], you go in the bin. The logical conclusion is that the code that does get written isn't elegant and isn't robust; it's cookie-cutter, and the wonderful insights that lead to innovation are lost. This also makes code easier to hack and prone to forgotten complex interactions. Bottom line: there are no junior engineers to come in, learn, grow, and bring the next bit of amazing to the market. There are no mid-levels, so there's no one to run the teams and help come up with solutions, and the senior devs are retiring eventually, so that knowledge base is diminishing by the day. I'm hoping that the industry, with stories like this, wises up and starts a massive junior-developer hire... because the industry is heading toward a global crash-out it can't fix, and most of us software engineers know it.
Hopefully this will turn around soon, because I see only bad things if it doesn't. The world has become too dependent on computers to just ignore the problem, and AI *mark my words* will never be up to the task the way humans are. Is it good as a helper? Yes. But provide the solutions, be creative, figure out the unquantifiable? No. And worse, AI is created in such a way that understanding it at the moment is impossible. We can build it, but we can't predict it... and that is a danger we shouldn't be ignoring. Even the senior devs have no idea how AI does much of what it does; it's called the black-box problem. And as people retire... who is going to go in and even have a shot at figuring it out? No one. So, those are my thoughts. By the by, I've written AI, and I voiced these concerns in class, and I was shockingly told that it's not my concern to worry about how it works. I built the AI... it worked... and it scares the hell out of me that it does.
youtube AI Jobs 2026-02-07T14:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx8YE5TveswSzEILsl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx6CkLEYk82a82cPPV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgykkbcoUTL-nJyJIEN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "disgust"},
  {"id": "ytc_UgyQXbJaOeEBkeZl9Xp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyPG21RHeSiTJ2wyRN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyCVNt-k-7BTgV6K6p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSF6jcMoHhvM-Q7w94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzV3dActtVedrqPNVF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgymDL6pqtz8Mo6c6vt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy8mxe_lNTlqvEUhct4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
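The raw response is a JSON array of per-comment coding records, which can be parsed and indexed by comment id to recover the dimension values shown in the Coding Result table. A minimal sketch, assuming the standard `json` module and using one record from the response above (the id-to-comment mapping and the batch structure are assumptions from this log, not a documented pipeline API):

```python
import json

# Excerpt of the raw LLM response: one coding record per comment id.
raw = '''[
  {"id": "ytc_UgzV3dActtVedrqPNVF4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "resignation"}
]'''

# Parse the batch and index records by comment id for direct lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coded dimensions for a single comment.
coded = by_id["ytc_UgzV3dActtVedrqPNVF4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # company resignation
```

These values match the Coding Result table above (responsibility=company, reasoning=deontological, policy=liability, emotion=resignation), which is how the per-comment table is presumably populated from the batch response.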