Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
sorry, AI is ALREADY being used for domestic surveillance. how do you think the…
ytc_Ugy29IU4P…
Let’s be honest, AI is coming for taking those white collar jobs: accountants, f…
ytc_Ugy5RjUMK…
maybe this is why there is no other life in the univers because they were kill b…
ytc_Ugzy6XqzR…
I want to see the letter and who signed it. Did anybody find the letter?
Edit: f…
rdc_cthv75z
AI engineer here! Chinese models and American models both uses mixture of expert…
ytc_UgyfPdsyt…
Basically, AI is going to be a nightmare for introverts in the job market if the…
ytc_UgzTynx5C…
This video was made three weeks ago and it’s already outdated. Have you seen the…
ytc_Ugz-Lknqi…
You think reworking the money system is impossible? The AI earns the money, and …
rdc_oh3478t
Comment
FOURTH MAJOR POINT
33:53 It's not every conversation. It's every message. Because again, it's just a plinko board. There is no program that you are having a "conversation" with. Every time you send a message, the entire chat log is sent to the servers, and that entire chat log is what is put through the plinko machine to determine how the disk falls and what it eventually puts out. This is the definition of the "context window" and why it exists. There is no "person" on the other side. There is no thinking, no consciousness, no waiting for your reply. This is just a black-box equation that you feed text into at your leisure on one side, and stuff comes out on the other, no different from any other mathematical equation. Again, a plinko game. The plinko board doesn't do anything until you feed in the disk, and that disk bounces around through an inanimate object as determined by physics and lands in the slot it lands in.
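The mechanism described above (the whole chat log re-sent with every message, with no state held between calls) can be sketched in a few lines. Everything here is a hypothetical illustration, not any real chat API: `model_reply` stands in for the model, and `send` is an invented client helper.

```python
# Hypothetical sketch: a stateless "model" that is a pure function of the
# full transcript it receives. It holds nothing between calls -- everything
# it "knows" about the conversation is whatever arrives in `transcript`.

def model_reply(transcript: list[dict]) -> str:
    """Stand-in for the model: depends only on the transcript passed in."""
    last = transcript[-1]["content"]
    return f"echo({len(transcript)} messages, last={last!r})"

transcript = []

def send(user_message: str) -> str:
    # The context window in action: append the new message, then ship the
    # WHOLE log to the model, every single time.
    transcript.append({"role": "user", "content": user_message})
    reply = model_reply(list(transcript))   # full history on every call
    transcript.append({"role": "assistant", "content": reply})
    return reply

print(send("hello"))   # the model sees 1 message
print(send("again"))   # the model sees 3 messages (entire log re-sent)
```

Note how the second call sees three messages, not one: the client re-sends everything, which is exactly why the transcript length is bounded by a context window.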
TL;DR
I cannot say it enough: just because something is complex, with so many moving parts that we are incapable of predicting it, something like a literal rock off the ground, that does not mean it is intelligent or capable of thinking. It does not mean that we should revere how unknowable that rock is. It also doesn't mean we should base the rise and fall of our entire civilization on nanometer-precision predictions of how the rock will break if we hit it with a hammer.
AI is not smart. AI is not dumb. AI is a rock. It's a river flowing through a plain. AI is the motion of sea currents. AI is the way water forms into crystals. There is no dumb here, because there is no intelligence to call dumb. You would not call a rock dumb the way you would a human, because there is nothing there to be dumb, unless you're talking about what a dumb shape it is for making you trip. But you would not legitimately evaluate the rock's intelligence, so why do people do that with AI? Because videos like this mislead, and try to declare that "AI" is more than exactly what it is.
The fact that you can make a plinko board and not know the infinite ways a disk will bounce, or where it's going to land, is so laughably obvious that no one would ever expect you to be able to predict it. And yet for some reason we've made a digital plinko board, and everyone has decided that despite the fact that it is acting exactly as it should, within all the parameters it should, because it hit a peg and bounced off in a certain, NORMAL way, the plinko board must be possessed and we don't know what's going on. This does not mean that the plinko board is not obeying physics. This does not mean that we do not know what is happening. Yes, unexpected things will happen, because there are billions of factors that go into the equation. You throw a ball at a wall, it bounces off. We know how this works; this is physics. The current AI fervor is that we throw a ball at a wall, it bounces off, and it does not land in the exact 1cm x 1cm spot we want it to after the impact. It's just... absurd.
BONUS POINT
"AI"... is not AI. Not remotely. Some of us who are old enough and cared about tech at the time might remember back in the early 2000s. There was this fancy new TV/monitor display technology on the horizon: LEDs. There were going to be LED TVs, and they were so cool and interesting because every pixel of the screen was going to be made up of three individual R, G, and B LEDs. This wasn't going to be just another LCD panel; we were going to get past all the issues of LCDs with these new LED TVs.
And you know what happened? An "LED TV" came out. But it wasn't an LED TV. It was an LCD TV. The same exact thing that LED TVs were supposed to replace. After years of building up hype about what an amazing and great new technology LED TVs were going to be, an "LED TV" came out that was in no way what any of those promises or expectations were. It hijacked the hype train completely by taking a product name that everyone had expectations for and placing that name on a FAR cheaper and inferior product. It was a name that made absolutely no sense. You did not go to the store to buy a "CCFL TV" in the 2000s. You went to the store to buy an LCD TV. But now here we are, and suddenly we care about the backlighting more than the actual display panel itself. Millions of people were deceived, because after years of harping about how great LED TVs were going to be... here they are! On shelves! Cheap! It was a miserable marketing bait and switch.
Later, as true, actual, legitimate LED TVs came out, companies had to scrounge for names like OLED to differentiate themselves from the false LED TVs.
It was perhaps the second greatest bait and switch I've ever personally watched. The first is, of course, AI.
AI is no different. People took a term with grand implications and slapped it onto a product that is so far below AI that it isn't even remotely the same genre. They've aggressively fed on all those preconceived notions of what "AI" means to upsell their product into something it is not remotely approaching.
And it's already happening again. First, superintelligence and the 'singularity' were supposed to be REAL AI... but these companies have already been dragging that name down to their level of just being LLMs, but better.
LLMs are absolutely not even remotely AI, but the marketing has been disgustingly effective in getting people to think LLMs are MASSIVELY more than they are. Everyone "knows" what an AI is, because AI is a term that has been instilled into our culture for decades upon decades, along with this expectation of what AI is. And now a different product segment has stolen that name and is using it to manipulate and mislead, and these videos both prove and expand this twisted belief that LLMs are more than exactly what they are, because people want to say that since they're called AI, they must be AI. They are not. They are text prediction with just enough moving parts for people to try and say that they are more than they are, that and very minor frills like the ability to re-query themselves for reasoning.
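The "text prediction" claim above can be made concrete with a toy example. This hand-rolled bigram counter is nothing like a real LLM in scale or mechanism; it is just the simplest possible next-word predictor, included to show what "predicting the next token from frequencies in training text" means at all.

```python
# Toy next-word predictor: count which word follows which in a tiny
# corpus, then greedily emit the most frequent follower. Purely
# illustrative -- a real LLM uses a neural network, not raw counts.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Tally next-word frequencies for each word (a bigram table).
nexts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    nexts[prev][nxt] += 1

def predict(word: str) -> str:
    """Greedy choice: the statistically most common follower."""
    return nexts[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' follows 'the' twice, 'mat' only once
```

Scaled up by many orders of magnitude and smoothed by a neural network, this is the family of mechanism the comment is pointing at: output determined by statistical patterns in the training data.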
MINOR POINT
36:13 Even more upselling. And this is where this stuff just gets all the more frustrating. It's doing what he was JUST TALKING ABOUT it doing: repeating patterns from the data it was fed. I don't understand why people treat this as different from everything else they say before and after.
There... there is just way too much here. And this is far too tiring, and depressing, to go through. I'm sure there are a dozen more topics to go over, like how LLMs are incapable of actually doing math, but hell knows if any of this will ever get read anyway. I started this days ago and... I at least hopefully have all the grumbling mostly out of my system, since I haven't touched it in a couple of days.
But food for thought. It's just frustrating to see people trying to upsell LLMs as more than they are and perpetuating these unfounded beliefs.
youtube
AI Moral Status
2025-12-30T06:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytr_UgyGGsy-Cc7bvs5zlWR4AaABAg.ASrQ3_JZF5nASriwQPD-TO","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytr_UgzR-0PJGnzbvV61HSh4AaABAg.ARrnMzyzSp3ARtyMrc1fbe","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytr_Ugwn8qj6IYR7McEx7EJ4AaABAg.ARLgqxzPxLDARLgsSPqZSA","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytr_UgwVxmo4X_hgu3PncmF4AaABAg.ARIwloZMbAhASqkFgk3Wyf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytr_UgyHL_P9RKGxs5Xz2JV4AaABAg.ARFYlJwJVZAARFZ7bS6dQy","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},{"id":"ytr_Ugy7AdZ5QN-ymetkA8B4AaABAg.AR5jLKQn_fWAR5jqpCK1zG","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytr_Ugy7AdZ5QN-ymetkA8B4AaABAg.AR5jLKQn_fWAR5ju9n4obw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytr_UgwGjcwuCAXJBOSJhFF4AaABAg.AQnr6aIZonGAQns2SabEzX","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytr_UgwzagCkWVDZSnfAQHV4AaABAg.AQbwBEncofwARgev8HZeQO","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytr_UgwzagCkWVDZSnfAQHV4AaABAg.AQbwBEncofwASz05Z0qKU4","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"}]
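For illustration, here is how a response in the shape shown above could be parsed into the coded dimensions that appear in the result table. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come directly from the JSON above, but the parser itself (`parse_codings`) is a hypothetical sketch, and the `raw` string is a shortened stand-in, not the actual response.

```python
# Sketch: index an LLM coding response by comment id, keeping only the
# expected dimensions. Field names mirror the raw response shown above;
# the function and sample data are illustrative, not the tool's code.

import json

raw = ('[{"id":"ytr_abc","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')  # shortened stand-in

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_response: str) -> dict[str, dict[str, str]]:
    """Map comment id -> {dimension: value}, defaulting missing fields."""
    rows = json.loads(raw_response)
    return {row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
            for row in rows}

coded = parse_codings(raw)
print(coded["ytr_abc"]["emotion"])  # -> indifference
```

Defaulting absent dimensions to "unclear" is one reasonable choice for tolerating partially malformed model output; rejecting such rows outright would be the stricter alternative.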