Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI engineer with a master's degree in AI here: though the premise of this video is kinda right, there are some big problems with what you are saying. We have seen a huge decrease in junior positions for software development. AI can objectively outperform any software engineer in terms of quality and speed. That is backed by all major benchmarks, and anyone who has spent 5 minutes with codex or claude code knows it. The problem is that it is often judged unfairly because of the perfectionism bias that believes that because it is a machine it has to be perfect every time, while a human or even a group of humans would not be able to do it better. It is needed indeed to have a human in the loop, but the tasks that the human has to do have massively evolved (hence the decay in junior positions). Many of my colleagues report a drastic surge in resolved issues thanks to claude code. Tasks that took them weeks now take them hours or minutes. Same goes with your tesla example. AI can prevent accidents, but people will focus on the one time it failed and not realize that it outperforms humans and a person would make the same mistakes and more. And with vibe coding what you normally have is a problem with how the user addresses the problem. Just like a human that is given a task, if it is too broad it is going to be lacking in the details. But if you do it in a granular and progressive manner, results are exponentially better. And this can actually be done by AI too. An agentic approach has proven to solve most of the issues that you complain about in this video.
youtube AI Jobs 2026-03-22T17:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxmipkYcG0V1B-7mAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyqDLM_eZAdE_jFI4Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy6qTjctdRjSf4WfH14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzMxYo5wgqXlo1iMdJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwlqkFh6XO2IyI3SA94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzYO-T9WNL5lAqezNx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_6MJjkl9BJM6v8WV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyRncR6RNC1mEGJUn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwuII6PsnL4NWmPfp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxRFRok6yzaxfz7uoZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
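A raw response like the one above is a JSON array of per-comment records, one record per comment id, with one code per dimension. A minimal sketch of how such a response could be parsed and validated before use: the `ALLOWED` value sets below are inferred only from the codes visible in this record (the project's full codebook may include more values), and `parse_coding_response` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this
# record (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"unclear", "none"},
    "emotion": {"mixed", "indifference", "approval", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    dimension carries a code from its allowed set; anything else is dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with the first record from the response above:
raw = ('[{"id":"ytc_UgxmipkYcG0V1B-7mAh4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]')
print(len(parse_coding_response(raw)))  # 1 valid record
```

Validating against a fixed code set catches the common failure mode where the model invents an off-codebook label; dropping (rather than coercing) such records keeps the downstream counts honest.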