Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i more or less agree that we'll at least need some programmers to at least look at the code that AI build, even if we can also automated code-checking with ai: we don't want to live in a world of black boxes and blind trust. we have to build the world we want to live in. but in 2024 the next generation of LLM will be a wild step not only in the number of of neurons but also in capability: memory, logic, optimized knowledge, automation, etc. also remember that ai RIGHT NOW understand everything you said in this video better than 99.9% of humans since it knows EVERYTHING: every word ever said or written, every problem ever documented no matter how rare and esoteric, etc. and ppl are working now in augmenting those ai by creating whole enterprise of ai agents with roles and personalities so that when you ask a programming question there i can literally be a million "employees" answering your question and debugging it. and they will be much better than any engineers with ANY amount of knowledge. so the next ai in 2024 will basically be AGi and do everything. programmers for the next 2-5 years will become prompt engineers and probably it's their managers that will lose their jobs firsts since the programmer will take it (managers often can't program or verify code). and maybe 50-90% of coders in jobs today will lose their current job since ai can do the job of an entire team. that being said every one will want code and most ppl won't want to talk to the ai about this and hire those coders... anyway 2024 will be insanely wild. not only for programmers but ALL knowledge jobs are gonna freak out.
youtube · AI Jobs · 2024-01-18T13:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzgvZ5RxlwbomicsXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwDDd3oEJ_4OaVc3bJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyaG6Utd1Yub6aGd7p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwAIcn0ln-sEqqcShx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxz1LgFYZh8ce7rq994AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"regret"}, {"id":"ytc_UgzuaW6lXf_6PllWaGZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwaezYnd3E_tn9NVNh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzBMvhQ9Zy_ifIOc694AaABAg","responsibility":"media","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwzFjZkFqWMukQQIZl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzmQKqrjOBhRsG5U2p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]