Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For this theory to be correct, however, companies would have to continually want at least 35% more features/code in their codebase (since the avg. developer using AI is 35% more productive), for an infinite amount of time. That is quite possibly one of the most ridiculous arguments I've ever heard, as much as I would like to believe it. The fact is, if the average developer is 35% more productive, and at some point companies don't need 35% more features/code, then the logical conclusion is that they will need 35% less workers on their software dev team. For a company that has 10 devs, that's not TOO big of a deal, that's 3.5 people gone from that company. But for a company with 1000 devs, that's 350 people gone; 10,000 devs, 3,500 people gone. Then multiply those numbers by all the total devs in the world. It means 35% less workers (globally) in the software development space. It is estimated that there are roughly 30 million professional developers, globally. 35% of 30 million = 10.5 million people. It may not happen all at once, but that is (over time) a whole lot of people without jobs. For the sake of my own optimism, I hope someone can come up with an intelligent rebuttal...
Source: youtube | AI Jobs | 2025-12-24T21:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwL_9Gsl9OFYjpiNEZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymH4Qn2mvOUffZgOR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzv8RcvzHmnjFBUlLt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzrAuAg2j0ZSajVlXt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugwdy3-2hkTUzYJzNjB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzRLAGiat1C5fBQAHR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzs3--T3oB5JC6ci3V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyS4C6cYWQUJdY5WnN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1FqkBqCD2vhz1-V94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyiQMKFV1VVaKQjJRd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
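To inspect the model output for a given comment, the raw JSON array can be parsed and indexed by comment id. A minimal sketch in Python, assuming the four dimension field names shown above (`responsibility`, `reasoning`, `policy`, `emotion`); the `index_codes` helper name is hypothetical, and the response here is truncated to two of the ten entries for brevity:

```python
import json

# Two entries copied from the raw LLM response above (illustrative subset).
raw_response = """
[
  {"id": "ytc_UgwL_9Gsl9OFYjpiNEZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyiQMKFV1VVaKQjJRd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index each coded row by comment id."""
    rows = json.loads(raw)
    return {row["id"]: {d: row.get(d) for d in DIMENSIONS} for row in rows}

codes = index_codes(raw_response)
print(codes["ytc_UgyiQMKFV1VVaKQjJRd4AaABAg"])
# {'responsibility': 'company', 'reasoning': 'consequentialist',
#  'policy': 'none', 'emotion': 'resignation'}
```

A lookup like this also makes it easy to cross-check the rendered coding table against the raw response, e.g. confirming that the displayed comment's row really carries `responsibility = company`.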