Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When companies are not growing, they need to cut costs. They focus on the essentials, cut any non-essentials, and implement money-saving devices. AI can help cut costs for these companies. When companies are growing in a booming economy, they need to maximize production, and as long as a human+AI combo generates more than AI alone (which is definitely the case with today's AI), companies will hire as many people as possible. So today's AI is a multiplier of the economic situation.

HOWEVER, what the AI visionaries are talking about is AGI - a kind of AI that would actually be as good as humans. In that case, AGI would be able to learn all the jobs, including any new jobs being created. This magical unicorn would indeed take over all jobs, existing and future. Companies would finally be free from the cost of human labor, and at first make huge profits - but soon there would be a race to the bottom as all the AGI-enabled companies are forced to compete and drive down their prices to just the minimum required for production. CEOs would get fired because there would no longer be profits. People with some savings would be able to buy things for a while, but sooner or later humans would no longer have money and would have to live in a parallel economy where AIs are not welcome...

But there's no real plan that tells us how to actually make AGI, only the belief that if you make bigger and bigger models, it should happen magically. This is as certain as past prophecies that going to the moon would become routine by the 1980s, that we would have jetpacks to get to work, that we would have unlimited energy from cold fusion. All these ideas resemble actual achievements, yet the real distance to them is much greater than imagined.
youtube AI Jobs 2025-06-24T03:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyWqEFp5PdmPMdh3YZ4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwM6kQQH-Tgi3zIXeF4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugz49bu2oQjqsvo0cp94AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgzQzg5G26FFfJCpPkt4AaABAg", "responsibility": "none",       "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_Ugwc1dIqYUcu8Nzk8v14AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxjTDW7EDgoNDmAYbR4AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgzoQztULa1-9cCGPxJ4AaABAg", "responsibility": "government", "reasoning": "contractualist",   "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgyBiso4jOY8zQqVQR54AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgycohAEV7ir-l3hqUt4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxOkqUo8BzpDM8HUOx4AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",          "emotion": "mixed"}
]
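The raw response above is a JSON array of per-comment codings; the coding result shown for this page is the record whose id matches this comment. A minimal sketch of how such a batch response can be parsed and indexed by comment id (the two records are excerpted from the array above; the variable names are illustrative, not part of any pipeline code shown here):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = """[
  {"id": "ytc_UgyWqEFp5PdmPMdh3YZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwM6kQQH-Tgi3zIXeF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]"""

records = json.loads(raw)

# Index the codings by comment id so a single comment's result can be looked up.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgwM6kQQH-Tgi3zIXeF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> company indifference
```

Looking up `ytc_UgwM6kQQH-Tgi3zIXeF4AaABAg` yields the same dimension values as the Coding Result table above (responsibility: company, reasoning: consequentialist, policy: industry_self, emotion: indifference).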