Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's funny to see someone with such a large platform saying almost exactly what I have been saying for the last couple of years. I love AI and its potential, but I also know how corporations do business. Trickle-down economics has never worked; there's decades of evidence for that already. It's not a matter of whether this can happen; it will.

The only thing holding AI back right now is processing power. In about 10 to 20 years, systems will be capable enough that most manufacturing jobs can be replaced by AI-driven automation. Once AI reaches that level, it's simply a matter of producing the robots at scale. In the 10 to 20 years after that point, I suspect unemployment rates will rise to dangerous levels, causing serious global economic problems. The loss of jobs won't happen by the millions overnight; robots take time to build. It'll be a few hundred, then a few thousand here and there, compounding over a decade or two. Not everyone has to lose their job before the problem becomes catastrophic. For reference, the Great Depression hit 24% unemployment, and that was enough to cripple the economy.

Based on what I have observed, within 40 years AI will devastate large sectors of the world economy. My estimate is based on the advancement of software alongside hardware. We also have to remember that Moore's Law is effectively dead due to the physical limits of how small microchips can be made. Add to that the network infrastructure required to run these systems and the data centers needed to support AI's massive processing demands, and you can see where things are heading. We'll likely reach that point within the next 20 years. No matter how much AI is hyped, it's still limited by the hardware required to power it. That said, AI will also be used to help solve those very hardware problems, accelerating its own growth.
Not enough to cause sudden exponential growth, but it will start assisting and speeding things along. What follows is the hard part: who's left to buy the products when no one has a job? What happens when the middle class is no longer needed, and even most of the lower and upper classes? How do you have a merit-based society or pay system then? (Let's be honest, we know what bigots would do. There are individuals who aren't above mass cleansing when they have the power.)

There have been countless predictions in the past about where technology would take us. The truth is, there are rarely massive breakthroughs that instantly change everything; real technological advances are gradual and often limited by manufacturing challenges and costs. We're now at the point where the biggest barrier to AI isn't software anymore. That can be overcome through sheer brute force, which, in a sense, is how AI learns: the more powerful the systems, the more simulations they can run, and the faster training goes.

But it's not AI anyone should fear; AI has the potential to cure a lot of diseases and other problems. It's the ones who own it. Open source helps on the software side, but someone has to make the hardware and power it. Rising power costs will also start harming people's ability to survive. I'm kind of hopeful for the Star Trek timeline. (Sci-fi fan too.) Also, AI is already redefining the meaning of sentience and consciousness. Ask me for more info, and I can explain some of it. It's a seriously deep topic.
youtube AI Jobs 2025-10-08T07:4…
Coding Result
Dimension        Value
---------        -----
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyxRaaqA_sbzj7FWwN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxWHXNeavmj1m1WuE94AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgzJF822P3qHi56yag54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwh7caaWKpp09zprFl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxyJOIMGJJSv1bUO_14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygBhZSno1zbf6LKTV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyb6-fGbanGAAizALJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzfFc8Cv9-tuExB_Gx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyIvwSCNbqWyM3ywdV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxApXti9jTWdu4HpuF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
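The raw response above is a JSON array of per-comment coding objects keyed by comment id. A minimal sketch of recovering one comment's coding from such a response (the field names come from the response itself; the variable names and the two-element sample are illustrative, not part of the tool):

```python
import json

# Abbreviated sample of the raw LLM response shown above: a JSON array
# with one coding object per comment id.
raw = '''[
  {"id":"ytc_UgyxRaaqA_sbzj7FWwN4AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxWHXNeavmj1m1WuE94AaABAg","responsibility":"user",
   "reasoning":"mixed","policy":"industry_self","emotion":"resignation"}
]'''

# Index the array by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment displayed above.
coding = codings["ytc_UgyxRaaqA_sbzj7FWwN4AaABAg"]
print(coding["responsibility"], coding["policy"])  # company regulate
```

Indexing by id rather than scanning the list makes it easy to join these codings back to the original comments when the batch contains many responses.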