Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When people talk about the future of superintelligence, it’s usually framed as progress. But the truth will be far darker. If machines replace human labor, the majority, the 99%, will no longer be seen as contributors but as liabilities. The CEOs and power brokers already shaping this world often view themselves as rulers and the rest as expendable. With total control of resources and technology, they would not allow massive populations to remain, because those populations could rebel. History shows that “excess” people are never sustained out of compassion; they are managed, contained, or reduced. And there’s another danger: AI reflects the values of its creators. A biased coder can design a biased machine. Prejudice, once written into algorithms, doesn’t just replicate, it scales. What begins as one person’s discrimination becomes automated, global, and brutally efficient. So the real threat of superintelligence isn’t the machines themselves, but who builds them, what values they encode, and how those values serve entrenched power. The real question isn’t “how will humanity exist with superintelligence?” It’s whether we will see through the theater before the final act is written for us.
Source: youtube · AI Governance · 2025-09-07T04:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxvM6nioVe9lKde6-l4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxl_moM7ZThHqcpBnR4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJkqsd9mVcNW70zjB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz6ZGJI849_LKKyWjF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz8J8BYO7r-gryvz_Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz0XWdQEahnsk-Va6N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyrdRKJ-1-raMAn4PR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzW5DSLsUlTfJF8w9J4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwpQmuYEA7xpu7yXax4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxQIyp6dNM5jMGV5rZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
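The raw response is a JSON array of per-comment records keyed by comment id, with one value per coding dimension. A minimal sketch of parsing and validating such a response, assuming the allowed category values are exactly those observed in the records above (the tool's full schema may be larger; `parse_coding_response` and `ALLOWED` are hypothetical names, not part of the tool):

```python
import json

# Category values observed in the raw response above, treated here as the
# assumed coding schema (an assumption; the real schema may allow more values).
ALLOWED = {
    "responsibility": {"company", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment id.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the assumed schema, so bad model output fails loudly.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim} value {value!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, as a raw JSON string.
raw = (
    '[{"id":"ytc_Ugz6ZGJI849_LKKyWjF4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)
coded = parse_coding_response(raw)
print(coded["ytc_Ugz6ZGJI849_LKKyWjF4AaABAg"]["policy"])  # regulate
```

Validating against a closed value set at parse time is what makes the "Coding Result" table above trustworthy: a model response that drifts from the codebook is rejected rather than silently recorded.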