Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Surely the greatest argument against industrial AI (as opposed to pure research) is the very reason for it, "to replace human labour." Venture capital wants to eliminate workers. No real ideas about how those humans will live, thats an "externality." AI is only a small part of the problem, right now, today, Sam Altman and all the other tech CEOs want to eliminate humans. Yes, from the workforce, not life, but without care for how those humans _eliminated_ from the workforce will make an income, feed their families, house or clothe them, afford medical care. UBI? In the last 40 years welfare programs have shrunk because the middle and upper classes literally hate welfare and that human bias is coded into AI by the questions humans ask and by the data scraped from human posts on the internet. MAGA is providing as much, maybe more, data for training models as Amnesty, Greenpeace or Medecine San Frontieres. Venture capital, especially in the tech sector is the danger hiding behind AI more than AI itself and AI itself, if it becomes truly sentient and superior, is a serious enough threat. If thoseself appointed "prometheans" steal the "fire" for themselves, they'll be the ones ordering AI to destroy the rest of us. AI is not the real threat yet, it's the privilege of those tech bros, their arrogance, the belief in accellerationism. That's the primary threat, the AI is, for now, simply one of their weapons.
youtube AI Responsibility 2025-06-16T01:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw0qW1N53tKOF-n7NF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwxlJ7_2zStKBZtiEN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwYC49BZiB_2_M3nLZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_UgxZLSp4Wtf7733v8w94AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwBQZAgkvb_WOVQUU14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzdI9EZKggfw2GoV514AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzbqtUZ-6xJSmtCGOZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugw918wQ5ssZSsvdzah4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwzfJmpW5j93_PI4k14AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyND_4_XgNDl3_eUxN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
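The raw response is a plain JSON array, so it can be parsed and tallied directly for downstream analysis. A minimal sketch (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` are taken from the response above; the two embedded records are copied from it, and the validation and tally logic is purely illustrative):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_Ugw0qW1N53tKOF-n7NF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwBQZAgkvb_WOVQUU14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Every coded record carries the same four dimensions plus the comment id.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}
for record in records:
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"{record.get('id', '?')} is missing {missing}")

# Tally one dimension across the batch.
by_responsibility = Counter(r["responsibility"] for r in records)
print(by_responsibility)  # Counter({'none': 1, 'company': 1})
```

The same pattern extends to the full ten-record batch, or to any other dimension, by swapping the key passed to `Counter`.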