Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Everyone is getting squeezed by extraction. Across politics and income levels, people see rising costs (housing, food, healthcare, education, basic services) while wages lag. Fees, shrinkflation, and junk charges make it feel like the system is designed to drain them. Surveillance pricing, meaning prices set from a shopper's personal information and circumstances, is now standard practice.

Right now, people are losing their jobs and can't find new ones. AI and automation mean job elimination: businesses adopt AI and automation primarily to reduce headcount and costs. Whatever new roles appear, the dominant effect is fewer human workers. We already trust AI-driven automation for surgery, operating space telescopes, flying planes, driving vehicles, calling balls and strikes, running factories and warehouses, managing large-scale farms, and producing and distributing web content and music. Given that, it's not credible to pretend AI/automation job displacement will stop at email, billing, HR, or basic coding. The time frame is unclear, but it's realistic to assume that 90%+ of today's wage work across all sectors is automatable. Mass displacement is inevitable; the only questions are how long it will take and when it will happen.

The U.S. has already committed to a Sovereign Wealth Fund and a national AI mission. President Trump signed Executive Orders to explore a U.S. Sovereign Wealth Fund and establish a national AI program. The Sovereign Wealth Fund Executive Order (Feb 2025) calls for designing a U.S. Sovereign Wealth Fund to give citizens long-term security and stabilize the federal balance sheet, and the Genesis Mission EO (Nov 2025) calls for a national-scale AI investment on a Manhattan Project timeline. Subsequently, hundreds of trillions will flow into AI infrastructure.
Over the coming decades, vast capital will be lent and invested into AI data centers, chips, power, and networks: exactly the kind of long-term cashflow assets sovereign wealth funds normally own. We already bail out banks and markets with trillions. In major crises, the U.S. routinely creates or mobilizes trillions to stabilize financial institutions and asset prices, and everyone expects this to happen again in future crashes. Meanwhile, predatory financial friction drains trillions from households. Overdraft fees, late fees, payday loans, high-interest cards, and "convenience" fees collectively move enormous sums from ordinary people into financial profits and market caps. Global household debt now totals roughly $60T, and the global debt market is over $300T.

We know some simple economic truths. Trust-fund math works: a kid with $1,000,000 invested at 7% annual return ends up with many millions over a lifetime, plus steady income from the yield. Compounding plus time equals security. Bigger risk pools are safer: Social Security, pensions, big insurance pools, and index funds all rely on the same principle, that large, diversified pools reduce individual risk. We also already finance huge things with future payments: mortgages, student loans, municipal and Treasury bonds, corporate bonds, and derivatives all turn future cashflows into present capital. This is standard global finance. Large institutions like Tencent, BlackRock, or the State Bank of India already manage trillions of dollars in assets for hundreds of millions of customers, often using these exact trust-fund and futures mechanics.
Given these truths, how is the United States supposed to compete and lead in the global AI marketplace, where hundreds of trillions will be deployed into AI infrastructure over the next 25-50 years, if it enters this era with a heavily indebted national balance sheet, recurring multi-trillion-dollar bailouts, large-scale AI-driven job loss at home, and foreign sovereign funds and oligarchic capital owning most of the core AI infrastructure?

The answer is to fund the Sovereign Wealth Fund using this simple, standard economic funding principle. Start with basic trust-fund-kid math:

- Invest $1M per person at 7% ($70K/yr of yield) at birth.
- The yield pays a monthly dividend (25% of yield under 18, 50% over 18).
- The yield pays each person's annual tax burden (25% of yield).
- The remaining yield compounds the principal for a lifetime (50% under 18, 25% over 18), growing it to $4M-$5M.

Everyone gets an untouchable SWF Genesis Bond Trust Account, pro-rated back to their date of birth or opened the day they are born. The fund itself is untouchable. When the person dies, 50% goes back into the fund and 50% goes into their heirs' funds: 100% re-capture. The funding source is the credit spread and fee layer we already pay, captured at national scale and returned as citizen yield, plus the return on the AI-buildout lending. Add a simplified 10% other-income tax and a 2% consumption tax to fund the base-level operations of the government. 330M people/micro-corps create a giant risk-mitigation pool, funded by lifetime bonds collateralized by the future AI infrastructure buildout. Every American citizen owns a piece of the Genesis Mission national AI stack. Simple trust-fund mechanics, standard collateralization of the future AI buildout, no money printing, no giveaways, no socialism. Micro-corps, engaged in macro-capitalism.
This creates a lifetime treasury bond fund for every citizen that provides them with a living-wage monthly dividend and a secured line of credit, pays each person's entire tax burden, eliminates all entitlement programs, funds a federal budget surplus in the tens of trillions, funds U.S. dominance in the global financial and AI markets, and prevents the AI/automation credit-default crisis with the safest possible structure. It's the biggest possible risk-mitigation pool; it creates the infrastructure build that provides jobs to bridge the transition, shares the wealth built by our tax dollars equitably, and grows exponentially over time. The whole plan is here: https://www.amazon.com/dp/B0G396Y1BN
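The comment's trust-fund arithmetic can be sanity-checked with a short script. This is a minimal sketch, not part of the original plan: it assumes a flat 7% annual yield, reinvests 50% of the yield before age 18 and 25% after (matching the comment's split, with the rest paid out as dividend and taxes), and assumes death at age 75; all of those parameters are assumptions.

```python
def genesis_balance(principal=1_000_000.0, rate=0.07,
                    reinvest_child=0.50, reinvest_adult=0.25,
                    adult_age=18, death_age=75):
    """Compound a birth bond where only a fraction of the annual
    yield is reinvested; the rest is paid out as dividend/taxes."""
    balance = principal
    for age in range(death_age):
        frac = reinvest_child if age < adult_age else reinvest_adult
        balance += balance * rate * frac  # reinvested share of the yield
    return balance

# Adult dividend: 50% of the $70K annual yield, paid monthly.
monthly_dividend = 1_000_000 * 0.07 * 0.50 / 12  # ≈ $2,917/month

print(round(genesis_balance()))   # principal at assumed death age 75
print(round(monthly_dividend))
```

Under these assumptions the principal lands near $5M by age 75, roughly consistent with the comment's $4M-$5M claim, though the result is sensitive to the reinvested fractions and the assumed lifespan.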
youtube AI Governance 2025-12-29T12:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzQM4iE-QHVAi21OVJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwBrUIZNGOAkY-jTIN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwKZk9VAylQkAnXkhJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwC33HQmQ9Z5l_xIeR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_UgzkpiccQZ3aFwFsMfh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwWjMtE9VKLY4ESgGV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugww60QZX9S0pODScpB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxpB32ykcCS2cTVzTl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxW5QA4KjoPhCGV9RF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxC01rF9uL9EgBO8JB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
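Raw responses like the array above can be validated before they are loaded into the coding table. The sketch below parses the JSON and rejects rows with unknown labels; note that the allowed value sets are inferred only from the responses and coding results visible on this page, not from a published codebook, so they may be incomplete.

```python
import json

# Allowed labels inferred from the observed output; the real codebook may differ.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical single-row example in the same shape as the raw response above.
sample = ('[{"id":"ytc_example","responsibility":"company",'
          '"reasoning":"consequentialist","policy":"unclear",'
          '"emotion":"outrage"}]')
print(len(validate_codes(sample)))
```

A row missing a dimension, or carrying a label outside the schema, raises a `ValueError` naming the offending id and dimension rather than silently entering the results.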