Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- The only people using ai to replace artists are the ones who would treat artists… (ytr_UgxVrX0gf…)
- According to my calculation they are robots made by humans organs and skin and y… (ytc_UgzumL6sQ…)
- You did this on purpose. No one like you invents ANYTHING they can't control. It… (ytc_UgwX2E4LD…)
- I saw a different version, where they tell AI to make a billion dollar pancake a… (ytc_UgweIP1z4…)
- I dont think a.i. will replace all of blue collar. Maybe certain aspects of it. … (ytc_UgxjNcHCw…)
- You haven't provided any argument why AI using your images as a reference and in… (ytc_Ugwa7HGYq…)
- 0:34 even i notice a distinct difference between the unpoisoned and poisoned dog… (ytc_Ugy7FLcmS…)
- Just wait a couple of years and AI ART gonna be the most demand on market xD, th… (ytr_Ugy5eJPK0…)
Comment
The headlong rush toward AGI needs to stop, now, while the human race takes stock, has a big, long-form conversation, and figures out what kind of future it wants, and how we get there safely. The idea that a handful of Silicon Valley CEOs get to decide the fate of 8 billion human beings, with zero consultation, is utterly ludicrous. Whatever the outcomes of AGI, we know they will be momentous, and unprecedented. The ramifications will be more consequential and profound, by orders of magnitude, than anything our species has ever known. Given these incalculably complex, paradigm-shifting consequences, what could possibly give anyone the right to decide or presume that we, the human race, are simply signing up, en masse, to AGI, in the name of profit? And it IS about profit, for those leading the charge, as well as power. Sure, there could be potential benefits for the rest of us, as we are repeatedly told, but these are companies - not charitable organisations. They're not in it to make the world a better place for all of us. If they were so magnanimously motivated, they wouldn't be monetising AI to sell us services; they'd be profit-free enterprises pursuing the creation of artificial life in order to learn from and celebrate it, rather than strap it down and enslave it. So, 'AGI for the good of humanity' platitudes falling from the mouths of disingenuous capitalists are flimsy at best, and more likely, wilfully disingenuous, dishonest, and diversionary. Their knowledge of the future is woefully underdeveloped, as is the collective thinking and general preparedness of the species as a whole. Nobody knows what's going to happen. To gamble so much, on behalf of so many, without any input from the rest of us, as if the collected opinions, ideas, hopes, and dreams for the future of 8 billion humans didn't even matter, is morally abhorrent, reprehensible, intolerable behaviour. 
Think about it - has there ever been so colossal a decision, in terms of the number of human beings that will be affected, that anyone attempted to push through without asking those same human beings? Just disregarding billions of voices? Deciding humanity must have AGI, without asking humanity's permission first, is, or will be, the largest and most unforgivable breach of democracy and human rights, in history. Why would anyone just roll over and accept the idea of an imminent AGI future, on a timescale over which we have no control, without knowing anything about the outcomes, simply because a few tech companies with vested monetary interests in its development have told us it's happening, and that it's inevitable, and not to worry because it'll be good for everyone. How can that be? A technology more consequential than any in our species' history, and its advent is... excuse me? ...being decided for us by a handful of CEOs leading the same gigantic profit-oriented corporations who are building it? No no no, you don't get to steamroller the collective rights and voices of billions of humans in the interest of your bottom line. There is absolutely no need to rush headlong at AGI. We need to put that aspiration on the back burner for a moment, while we have a much larger conversation. I have no doubt that this idea and these concerns will be resisted, dismissed, waved away, as they have been thus far. But we can't just stick our heads in the sand and accept their itinerary. We need to speak up as the public, as the human race, and make it clear that this decision cannot be made without us. There is too much at stake, we are utterly unprepared, and nobody has the right to gamble with the future of a species. In my view, such action, being so profoundly, wholly undemocratic, would likely constitute a crime against humanity. We all need to stand up, right now, and make our opposition to this crazy, lawless, inhuman capitalist irresponsibility, heard. 
Boycott the products, write to your representatives, demand a moratorium on AGI/ASI development, make your voices heard. Of all the times in our history that capitalists have gotten away with making life worse for the many, this is the one to protest. Demand life - not only for yourself, but for billions of future Earthlings. Demand AI for the 8 billion - not 8 billionaires.
youtube · AI Governance · 2025-09-10T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw4HV95CNWeGWBUPIl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxbwloABSnB5mOrh954AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxELNiMbVwHljrkBrd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRfuw6E71Pb3Bw3jZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlwNFvpBfA-U-vrGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwr-lCrt-QcNN_dOyd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwPyelEEKWaPAcs0xB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwW3Gd-DBf1flh29dB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwDAHcEt9XThuwUUAF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKfTNlDalRcGw048R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
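A response like the one above is only usable downstream if every record carries a valid comment ID and an allowed value for each coding dimension. Here is a minimal validation sketch in Python; the allowed value sets in `SCHEMA` are inferred from the examples on this page and are an assumption — the project's actual codebook may include values not seen here.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# ASSUMPTION: the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        # IDs on this page use the ytc_/ytr_ prefixes (comment / reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugw4HV95CNWeGWBUPIl4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
print(parse_coding_response(raw)[0]["emotion"])  # fear
```

Rejecting malformed records at parse time (rather than silently storing them) keeps the coded dataset consistent with the dimension values the results table reports.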