Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My take on this:

- The first decade of AI will basically spit up code, increasingly better one. But the thing is, SOFTWARE ENGINEERS - not just jimmy the bootcamp guy that was previously a waiter that wanted to surf on the dev bubble - are supposed to come up with solutions to problems using software. This goes with breaking problems into manageable pieces + using current IT technology to help you in solving them. This will be the actual human work to be done for now.. I believe the market will shift towards humans knowing e.g., WHY use architecture X vs Y, than humans knowing how inheritance works in javascript or what are the best libraries for idk... csv parsing.. Towards employees knowing deeply why use REST or gRPC. I think this will be the actual human work for the next few years.

- 10+ years from now: AI will probably have reached a point of ACTUAL intelligence in the sense that it will THINK and CREATE and make rational decisions, design, optimize for cost or time, etc etc.. much better than us humans.. But I believe at this point even the jobs that are pretty safe now, like phD research and knowledge expansion will just be doomed.. After this deadline IMO there will be basically NO BRAIN WORK left for us to do, really. I might be wrong on the timeframe.. maybe 15 or 20 years.. but thats basically it. I truly dont know what humans will do for high-income work after that. Maybe things that are good only because theyre full of human flaws, like sports, fighting, etc..
youtube AI Jobs 2023-12-06T05:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwVpLFzP6gOAmhZrx54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugza4O_VKZBFdiBVUcN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwAYm5XN5vihz5VHst4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgydF81i3HftB76Rikp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwqtUQTJCybgBBClWJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyz9e7QaM4XfqnzFfx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAmULBtYgX1Cprhu54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwEtBaC8NWNrdrcYTR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIvfOx8Zxyquvg3Yl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykJqZqwv82AWSLI9V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
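A batch response in this shape (a JSON array of records, each carrying a comment `id` plus the four coding dimensions) can be indexed by comment id for lookup. The sketch below is illustrative only, not the tool's actual pipeline; the `index_codings` helper and the two embedded sample records (copied from the raw response above) are assumptions for demonstration.

```python
import json

# Illustrative sketch: parse a batch coding response with the same shape
# as the raw output above, then index each record by its comment id.
raw = """
[
  {"id": "ytc_UgwVpLFzP6gOAmhZrx54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugza4O_VKZBFdiBVUcN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

# Every record is expected to carry the comment id plus the four dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(payload: str) -> dict:
    """Return a {comment_id: coding} map, skipping malformed records."""
    records = json.loads(payload)
    out = {}
    for rec in records:
        if isinstance(rec, dict) and EXPECTED_KEYS <= rec.keys():
            out[rec["id"]] = rec
    return out

codings = index_codings(raw)
print(codings["ytc_Ugza4O_VKZBFdiBVUcN4AaABAg"]["emotion"])  # approval
```

Skipping records that lack the expected keys guards against the common failure mode of LLM output: a syntactically valid array whose individual entries are incomplete.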