Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What this ignores is that you still need to know the syntax to review code. AI won't make the error of forgetting a semicolon, but it will use outdated and insecure code structures. You need to know how to do it better to use AI effectively, and to learn how to do it better, you first have to learn how to do it at all. So this applies to senior techs, sure, but you still need junior devs to actually do the boring, repetitive work of writing syntax until they've internalized it enough to see when AI makes up crap. And I am saying that as a junior dev. You need to understand code before you can check code. Additionally, AI will always write samey code. It will never, literally never, come up with something truly novel. That isn't needed most of the time, but those few times, you need human developers who can realize potential solutions from documentation, not from examples. So sure, for senior devs AI is a huge boost; for junior devs it's a danger, because it can and will create bad code that you don't know how to clean up if you don't know how to actually write the syntax.
youtube AI Jobs 2025-12-21T13:4…
Coding Result
Dimension      | Value
Responsibility | user
Reasoning      | deontological
Policy         | unclear
Emotion        | mixed
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugyfx6js0YmcbjLzzYJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz5qxIMrsbtYRklnqR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugz2JDjut0hp6w_w-LJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwMTWeSnEBtQzHXEjB4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx2YmLlvNRryIQn-rd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxr1O3edhTtdnX-Lnl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwTpf-hFAYYNkGDoXF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz4MvH02ho-m67orrB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxHHM5b3WFBHrZq8eJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugwb6lKEdfpPDc9LCLV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
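The raw response is a JSON array with one object per coded comment, keyed by comment id with the four coding dimensions as fields. A minimal sketch of pulling a single comment's codes out of such a response (the `by_id` index is illustrative, not part of the actual pipeline; the record shown is copied from the response above):

```python
import json

# One record copied verbatim from the raw LLM response above; the full
# response is an array of such objects, one per coded comment.
raw = ('[{"id":"ytc_Ugxr1O3edhTtdnX-Lnl4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"unclear","emotion":"mixed"}]')

records = json.loads(raw)

# Index records by comment id for direct lookup of any coded comment.
by_id = {r["id"]: r for r in records}

codes = by_id["ytc_Ugxr1O3edhLnl4AaABAg".replace("hLnl", "hp6w") if False else "ytc_Ugxr1O3edhTtdnX-Lnl4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # prints: user mixed
```

The same lookup generalizes to the full ten-record response, so the table above can be cross-checked against the exact model output for any id.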