Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Coming Automation Crisis: A Warning

Dr. Roman Yampolskiy claims that by 2030, only five jobs will remain, with AI and robotics eliminating ninety-nine percent of human employment. Headlines scream, podcasts echo, business outlets amplify the panic.

Even flawless AI and humanoid robots would take decades to saturate a complex economy. Technology does not flip a switch. Electricity, the internet, smartphones—they all took generations. Hospitals, factories, schools, airports cannot rebuild overnight. Half of America’s infrastructure is already crumbling.

I myself am militantly retired. I do not seek a job under American Style Capitalism. The system will automate regardless of human needs, calling it efficiency and wealth. Retraining programs are theater. Whole industries vanish. Humans trained to manage AI will be replaced by AI itself. There is no time to retrain millions and the question remaining is retrain for what?

For those in power, the real threat is legitimacy. Crowds, not machines, decide the fate of elites—a modern Ceaușescu moment. When people see the show is a lie and have nothing left to lose, power is fragile. Job loss alone is secondary; the collapse of legitimacy is fatal.

Resistance begins with clarity. Automation under capitalism is not neutral. Efficiency is a weapon. Refusing the treadmill of employment, even while living your life, embodies a quiet defiance no algorithm can replicate.
youtube AI Governance 2025-09-06T15:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxYEp0DUfBg1DDYyQx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx2e9Z6V99tuWRLUbZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyf2n81X9wDPzgXCEt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxXDZiBTLk_yGnXwsV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAhwjn7MMsIfl2oS54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyZAOB_Swm3wfFeget4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzIAnZbhmMTWFkKtyd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugzrd8FhB7gx1HmLkN54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyGZCz8f2LN6Zv7vnd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx4sfxb1ktjreH7THJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
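The raw response is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a response might be parsed and checked, assuming the dimension values observed in this log (the `parse_coding_response` helper and the `ALLOWED` map are illustrative, not part of the actual pipeline; the real codebook may include values not seen here):

```python
import json

# Allowed values per dimension, inferred only from the codes visible in this
# log; the full codebook may contain additional values.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed",
                "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example with one record from the log above:
raw = ('[{"id":"ytc_UgyGZCz8f2LN6Zv7vnd4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
records = parse_coding_response(raw)
```

Validating before storing is what makes the "Coding Result" table above trustworthy: a hallucinated label that is not in the codebook fails loudly instead of silently entering the dataset.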