Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I watched this video back when it came out... and it's still deeply, deeply haunting and bothersome to me. Not because of the reality of the situation, but because of how much extreme misinformation there is in this video. This is the kind of content that makes AI far, FAR more of a danger than it already is, and it's really disheartening to see educators putting out stuff like this.

Warning: Like I said, I watched this on its release over a month and a half ago, and it's such a long video, so I'm largely working off of memory since I'm not keen on watching the full hour+ again when it frustrated me so much the first time. It's already eating so much of my time and effort just going over these topics. For all of these topics I think it's important to keep plinko in mind, because that's 90% of what an AI is, and that's what I'm going to be referencing repeatedly. YouTube won't let me post this, probably because it's stupidly long, so we'll see if breaking it up helps...

FIRST MAJOR POINT
Lines like "How are they doing any of this, we just don't even know." (12:14) are just misleading. There is nothing weird about an LLM, there is nothing truly unpredictable. Not truly. No, we cannot say EXACTLY what will happen when you drop a query into an LLM or a disk into plinko. This does not mean that plinko is sentient. And I cannot stress enough how these are the same things. An LLM is a machine where data (the disk) hits transformers (pegs), which bounce that data off in slightly different ways depending on how it hit; using incredibly complex math (physics to the extreme, people cannot predict this) it sends the data (disk) off at slightly different angles, which determines which transformer (peg) it hits next and the exact way it hits that transformer (peg). It just keeps rolling these complex equations, moving data down a distinct path, top to bottom. What LLMs do, specifically LLM training, is try to rig those pegs. It's a scientist sitting there with the plinko board, measuring the exact spot they're dropping the disk from and trying to make sure it always arrives in the same hole at the bottom when dropped from that spot, or at least close, and the scientist does that over and over and over. When it falls into the wrong slot, LLM engineers call that misalignment. Just because this peg board has hundreds of billions of pegs does not mean it has suddenly gained intelligence. A rock has untold billions, trillions, or more of atoms shifting and bouncing in ways we can only barely begin to conceive. It's unpredictable like waking up in the middle of the night to an explosion and going outside to find your car blew up from being hit by a meteor. You couldn't have guessed that. You might call it unpredictable, insane, an act of god, whatever, but it's not. Was it unexpected? Sure, in some ways, but it also wasn't "unexpected". No part of what happened was outside the laws of reality, or even what we expect. We know meteorites land on the planet. This stuff happens. You could not, in your specific understanding of the world, have predicted it... but it was all just physics being physics. None of this makes a plinko board sentient. Just because we do not know the EXACT physics of every microsecond of the disk's fall and impacts does not mean that we do not know how a plinko board works and that it is instead sentient. Trying to upsell LLMs as something more than they are is seriously the biggest danger we have right now by far. It confuses the casual people about what's going on and it confuses the people in power who are writing laws and controlling companies that are implementing this at the cost of employees, consumers, and our entire species' culture. It leads to this false belief that LLMs are more than predictive data.

SECOND MAJOR POINT
As I already called out above, but since people like to talk about LLMs "thinking" I'm going to say it again: LLMs do not think. A disk simply falls through the plinko board. A "reasoning" model is something that takes what it just said and throws it through a plinko board again to try to determine if the results of that second run are still correct and match up with the first. We have "reasoning" models because there is absolutely zero "thinking" involved in the first place. Because the results are such wild guesswork, with no logic involved, that the most basic double-checking can reveal massive flaws. If it was doing what it was supposed to be doing in the first place, there would be absolutely no value to "reasoning". But all this also further hammers home what a plinko machine this all is, because an LLM cannot think. It cannot bounce ideas back and forth. It cannot formulate thoughts. It is incapable of thinking before speaking, which is why it has to "reason" by restarting and doing it all over yet again, running its own answer through the plinko machine. As sentient life we are able to take thoughts and examine them from different angles. For us, the plinko machine doesn't go one way. If we need to go back we can. Our internal plinko machines can do whatever they want; they aren't held in the grasp of gravity. An LLM can only move top to bottom. It is a defined route. Data goes through it one way; it does not linger on it, it does not reverse course, it does not contemplate what it is saying. All it does is spit out exactly what it was designed to say, and sometimes that slop is so bad that running the plinko machine again is able to call out its own slop. But its data moves from Point A to Point B. Only top to bottom. Only left to right. Sentient life does loop-de-loops and spirals, we double back frequently, we scour the perimeter, we think back and forth between whether the car this morning was blue or red. We have indecision.

THIRD MAJOR POINT
How the LLM """"thinks"""". An LLM processes ones and zeros. Nothing more. Even if an LLM COULD think, even if there was something there, LLMs do not think in letters. They do not think in words. Or pictures. LLMs think in ones and zeros. An LLM's entire consciousness is ones and zeros. There are no letters. A """letter""" is a series of 8/16/32 ones or zeros that shows up repeatedly in the data set again and again. The AI does not know what the word "at" is. What the AI knows is that there's this series of binary 01100001 00000000 01110100 00000000 that shows up in the data set, and that it is frequently flanked by spaces on each side, 00100000 00000000. All the LLM "knows" is 00100000 00000000 01100001 00000000 01110100 00000000 00100000 00000000 and the series of ones and zeros that most frequently occur before and after that. Even assuming an LLM could think, the entire concept of a letter is like humans trying to conceive of the fourth dimension. Letters are so UTTERLY AND COMPLETELY beyond their capability to imagine in the first place. It's like magic. It's like drawing a pentagram on the ground and someone describing to you that you just made a whole campatimop for the arkinsomoton that live in the fourth dimension so that they can auroms their zomqim. These are ideas that the human brain cannot begin to process. An LLM can never truly conceive of something beyond a zero or a one. But again... the LLM isn't thinking in the first place. This is all just predictive DATA based on previous patterns. It is, again, a plinko board, and once again, implying that it is more than that just confuses everyone and turns the LLM into this thing that's so unknowable and so powerful, when it absolutely is not. It's what convinces people they should put faith in it. This is also why it is entirely unnotable that "LLMs" can also be used to generate pictures, video, and music. Because this stuff is all exactly the same to the LLM. Everything here is just different patterns of ones and zeros. If it can predict text it can predict everything else just as well, if the model is tuned for it. There is absolutely no difference to the LLM itself, only how we go about weighting its training. The same thing happens with agents. It's all ones and zeros. The LLM reads an image of your computer screen, or it reads the HTML template of the website you're looking at, and uses those same 0 and 1 inputs, just as it uses the messages you send it, to output 0s and 1s back. Only instead of data that gets translated to text for us, it's data that is translated to mouse movements and clicks.
youtube · AI Moral Status · 2025-12-30T06:4… · ♥ 2
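The comment's first point rests on a determinism claim: same input, same weights, same path, same output. A toy sketch of that idea, purely illustrative (the layer count, sizes, and tanh nonlinearity are arbitrary and do not correspond to any particular model):

import numpy as np

# Toy "plinko board": a fixed stack of layers applied to an input vector.
# With fixed weights and no sampling, the same input always follows the same
# path to the same output slot, which is the determinism the comment describes.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(4)]  # stand-ins for transformer blocks

def forward(x):
    for w in layers:
        x = np.tanh(w @ x)        # each layer deflects the data a little
    return int(np.argmax(x))      # the "slot" the disk lands in

x = rng.standard_normal(8)
print(forward(x), forward(x))     # identical: same drop point, same slot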
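The bit strings quoted in the comment's third point for the word "at" are the UTF-16LE bytes of " at " (each ASCII character followed by a zero byte). A minimal check of that claim; note that production LLMs tokenize text into integer token IDs rather than consuming raw UTF-16, though the underlying point that the model only ever sees numbers stands:

# Reproduce the bit pattern quoted in the comment for " at ".
bits = " ".join(f"{b:08b}" for b in " at ".encode("utf-16-le"))
print(bits)
# 00100000 00000000 01100001 00000000 01110100 00000000 00100000 00000000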
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwR0KTcJzfZClYUfUp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxmLnN8aUrKs8NdHmd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx3atiBF_UAseYEMyN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz8KnisJP2_V8gVV2Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwmQ9qsKgncJ4oPhDB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz2rapkG0ziXD2ZObh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwn8qj6IYR7McEx7EJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugzb-AhnURvnECZExOJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyXTdJz7q3st2ci12t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzuDifMoN7y0Md-jT14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"} ]