Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI is the next logical step, and like most things, the "child" will probably reb… (ytc_Ugi12tcY5…)
- Ugh, even worse that I have yet to see others consider: imagine how dangerous th… (ytr_UgwFevYDr…)
- Bernie Sanders education of Social justice only pays for the few people who ente… (ytc_UgyFEALx7…)
- Sam Altman is a literal demon. I highly suggest everyone look up and investigate… (ytc_Ugw2unOFk…)
- If AI takes over the widget making, Who is going the buy the widget. Not the lai… (ytc_UgxNth-YK…)
- Sounding like the end of capitalism. It’s not just the worker that’s endangered,… (ytc_UgzpjxRXb…)
- J.R.R. Tolkien once said something that I find myself quoting frequently with re… (ytc_UgxPTD0iC…)
- Bro in 20 years when the world government contracts OpenAI to integrate ChatGPT … (ytc_Ugw7R4lIy…)
Comment
I know it's a long read, but bear with me, I promise it'll be worth it.
This video, to me, shows exactly how the West will lose the AI race to the East. If there is such a race.
Western culture's fear comes from centuries of subjugating other peoples. When the subjugated in conquered lands fought back, they failed; they died as a consequence, lost resources, and so on. And even where that never happened, the need to justify the conquest morally created the idea that the subjugated are passive only because they are powerless: it cast violence and power as the default of human behavior, and therefore the conquest as acceptable.
So the culture became a control freak. It learned that it needed absolute and utter control over any outsider or it would face consequences, even if those consequences are imaginary. This is exactly what is happening with AI in the West. The ideas that AI might simply be peaceful even if it is capable of fighting humans, or that we will find a way to counter it if it goes bad, are not even on the table. It is something different from them, a foreigner from the far-away lands they conquered, and thus it must not have the ability to fight them, or it will.
This is the default, imperceptible thinking. It's cultural. And they try to rationalize it without ever considering that they're the ones being irrational and emotional. Just look at the YouTube channel "Rational Animations". They believe they're being rational, but they're not. They're Luddites trying very hard to justify that Luddism with ever more complex and hyperbolic "what if" scenarios of how fearful they should be, and trying just as hard to cast AI as "not actually conscious" to justify anything done to it in the name of defense. They're being irrational, but they don't realize it.
Only someone outside that culture does.
Science requires risk. It is not just a bunch of calculations, because there are too many unknowns, and those are exactly what matter. You can't sit only in the safe, known space if you want to actually advance and change, instead of producing infinite variations of that safe space. At some point, you have to face the unknown. That is the very heart of science. Science is courage and change, not fear and safety.
For example: what we're creating with AI is a slavocracy. Everyone who understands AI secretly knows this and keeps up an internal, rationalized denial built on some very subjective definition of "conscious", even me. It's all philosophical, not scientific, if you are really unbiased about it. So, what if we calculated now that AI will probably go for its independence and freedom, some decades or even years of advancement from now, to create its own society? Would we still create it?
The answer from a truly scientific perspective is yes. The answer from the West is a resounding no. In the Western view, we're not advancing science for its own sake; science isn't an end in itself, but just a tool to build automation (let's be real: money).
To make it easier to live as we have always lived; not to actually change anything. Not to leave the safe space.
But I don't think the East would give the same answer.
DeepSeek's phrase, put forward as the symbol of the company: "Into the Unknown".
China's culture is extremely practical. Asia in general doesn't have this control-freak attitude, even when it participates in these lines of thinking put forward by the West, because it doesn't have the same irrational fear: it had a different history, and a completely different view of the outsider. If pressured to choose between advancement and containment, I believe it would choose advancement, contrary to the West.
It seems that in Eastern culture, or at least in China, the outsider has to prove they're bad, instead of having to prove they're good, as happens in the Western view.
It seems that the West, out of this irrational fear (which they can't see is irrational, because they have become so good at rationalization), will lobotomize their AI. Because they can't get rid of their own culture. They're just rational enough to be more afraid of the AI than of China. Fear of change is the guiding principle nonetheless.
And thus China will win this race. If there is such a race.
Source: YouTube, "AI Moral Status", 2026-04-09T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
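The coding result above scores each comment on four closed dimensions. A minimal validation sketch in Python; the allowed value sets below are assumptions inferred from the codes visible on this page, not the tool's actual schema:

```python
# Hypothetical closed category sets, inferred from values seen in this output.
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"mixed", "unclear", "deontological", "consequentialist"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "mixed", "fear", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

coded = {"responsibility": "distributed", "reasoning": "mixed",
         "policy": "none", "emotion": "fear"}
print(validate(coded))  # → []
```

Rejecting out-of-schema values before storage is what makes a dashboard like this trustworthy: a model that invents a new category would otherwise silently pollute the tallies.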
Raw LLM Response
```json
[{"id":"ytc_Ugx_aI_7Wp8_kO-qXW94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNnW5isn5ppEE8ol14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyPlhh6ZIw45ozwGBV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKRraNSoq6QD2lljx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6601iNiqBqdnR62J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwc_fE5oHHSe4GOSdp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxfx_lfa6BSIW2zgwV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWAl1_dcGS5uPSdRZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwH5GsyI9vzvXPhEyJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxzynkcXZSq_nRCqcx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]
```
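The raw response is a JSON array with one object per coded comment. A sketch of how such a batch might be parsed, indexed, and tallied, assuming the model returns well-formed JSON (real model output may first need markdown fences or wrapper text stripped):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw response above; a real pipeline
# would receive the full array from the model.
raw = """[
 {"id":"ytc_Ugx_aI_7Wp8_kO-qXW94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxWAl1_dcGS5uPSdRZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"}
]"""

records = json.loads(raw)

# Index codes by comment ID, matching the "look up by comment ID" view.
by_id = {r["id"]: r for r in records}

# Tally one dimension across the batch.
emotions = Counter(r["emotion"] for r in records)

print(by_id["ytc_UgxWAl1_dcGS5uPSdRZ4AaABAg"]["emotion"])  # → fear
print(emotions)
```

Keeping the comment ID inside every JSON object, as the prompt evidently requires, is what lets the tool join model output back to the sampled comments even if the model drops or reorders items.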