Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Bernie, that’s not how it works. There’s always a transitional phase, and major advances usually create a wave of new, higher-skill, higher-paying jobs. Yes, “lights-out” robotic factories exist—but they’re rare and mostly limited to stable, narrowly defined processes. Real plants still need people for changeovers, QA, maintenance, and exception handling. Take trucking: we can automate long-haul routes and yard moves, but last-mile delivery remains stubbornly human with current tech. Even if line-haul becomes largely automated, we’ll still need master drivers to supervise systems, technicians to recover and repair vehicles, and engineers to build and maintain the infrastructure. Jobs don’t vanish; they shift up the skill ladder, and legacy skills persist for contingencies like manually recovering a disabled truck. That’s a feature, not a bug—phase out repetitive, low-autonomy work and move people into roles that compound skills and earnings over time. Where I think your view misses the mark is on what policy can do. A negative income tax (a universal, refundable floor that tapers as earnings rise) would let everyone share in AI-driven productivity while preserving work incentives. Pair that with portable training credits, rapid re-employment support, and pro-competition rules so concentrated platforms don’t bottle up the gains—and you get a future where robots cover basic needs while people learn, build, and choose the work they actually want. The goal shouldn’t be keeping humans laboring forever; it should be decoupling basic security from a single job so people can move into better ones as the economy evolves—STEM, skilled trades for automation, field support, safety oversight, and the entirely new categories that always show up after a tech shift. That’s practical, optimistic, and achievable with straightforward, good-faith collaboration across the aisle.
youtube AI Jobs 2025-10-09T00:3…
Coding Result
Dimension    | Value
------------ | ---------------------------
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | approval
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz8oDCp3o_ZeeUNpol4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugxzr4FTrgmbXLGKdKJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxEPiF5mzbrPho8EFF4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwQRtYqnc3iOl_q-ah4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugwt3mTFVUySCpx8sh94AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxvcDXgC7-xUx_NCAt4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgzM392l4VwKRTDApAp4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgxVR3lafyXR1cpSMJN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwVQvCcpKBtZE0cx8J4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgxtvpZ4Y-EC17H61HR4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"}
]
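The coded dimensions shown above can be recovered from the raw model output by parsing the JSON array and looking up the record for one comment id. A minimal sketch (the id used here is taken from the last record in the raw response above, whose values match the coding result; whether that record actually corresponds to the displayed comment is an assumption, since the comment's own id is not shown on this page):

```python
import json

# A one-record excerpt of the raw LLM response, verbatim from the array above.
raw_response = '''[
  {"id": "ytc_UgxtvpZ4Y-EC17H61HR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coded dimensions
    for a single comment id."""
    records = {rec["id"]: rec for rec in json.loads(raw)}
    rec = records[comment_id]
    return {dim: rec[dim] for dim in DIMENSIONS}

dims = lookup_coding(raw_response, "ytc_UgxtvpZ4Y-EC17H61HR4AaABAg")
# dims → {"responsibility": "none", "reasoning": "consequentialist",
#         "policy": "none", "emotion": "approval"}
```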