Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's what bothers me about the AI trajectory we're on: there's a fundamental economic paradox nobody wants to address. If automation eliminates most jobs, who's buying anything? The whole system depends on workers being consumers. The wealthy can't extract value from a population with no purchasing power.

You could point to UBI as the solution, but that's a shortcut to real thinking. History shows us that centrally planned resource distribution doesn't work. Do I need to remind everyone how Communism went? You can't sustainably allocate finite resources through bureaucratic formulas without running into the same problems that plagued every command economy we've tried. Human incentives don't bend to tidy theoretical models.

The real issue is that AI fundamentally breaks capitalism's core mechanism: the exchange of labour for wages that fuel consumption. We're not talking about tweaking the system; we're talking about its foundational logic becoming obsolete. That requires completely rethinking our economic structure, yet we're barreling forward without any coherent plan for what comes next.

What frustrates me most is the inevitability of it all. This transformation is happening whether we've thought it through or not, whether we're ready or not. I'm old enough to remember life before the internet, before everyone had a computer in their pocket. There was something valuable about that world we've lost. Now we're about to make an even more dramatic leap, and I can't shake the feeling we're not prepared for what we're giving up.
youtube AI Governance 2025-12-11T01:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzLaYMnzbpQaXPV7g54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz5z_Yg7AsBOZeZhih4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzcZEpd0EdO7X2z1114AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxyD_oaV2YtjV5kvap4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyp3aTu5sVIOqXCoDN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxgog57TM0Kv23JzU14AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugzp8MlnwiJrrQAPdpR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz2WKZcpiCPdDZ5_Ld4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz97yKBhSVU4FsK7Kp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzdS6jG4aqWc9EV03t4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
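A raw response like the one above can be parsed and sanity-checked before the codings are stored. The following is a minimal sketch, assuming the label sets inferred from the values appearing in this result (the real codebook may define more labels); the `validate_codings` helper and the sample `raw` string are illustrative, not part of the actual pipeline.

```python
import json

# Allowed labels per coding dimension -- ASSUMED from the values seen in
# this page's output; the real codebook may be larger.
SCHEMA = {
    "responsibility": {"distributed", "ai_itself", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "resignation", "indifference", "approval", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip entries missing the comment id
        # Every dimension must be present and hold an allowed label.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical sample: the second record uses an out-of-schema label.
raw = ('[{"id":"ytc_x","responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"robot","reasoning":"unclear",'
       '"policy":"unclear","emotion":"unclear"}]')
print([r["id"] for r in validate_codings(raw)])  # → ['ytc_x']
```

Filtering rather than raising keeps one malformed record from discarding an entire batch; rejected ids could instead be queued for re-coding.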