Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Pretty solid. Something I see a lot is that when people argue about whether generative AI does or doesn't learn like a human, they're often talking about different things.
Now, there certainly are some people taking the comparison too far, asserting that nobody should be upset their work is used for training or that AI training should be held to the same legal standards as human education. I consider this a willfully ignorant line of argument that inflates the connection and ignores several problems in favor of reaching a convenient conclusion. BUT
It's also wrong to say that there are no similarities. The problem with the human-learning section in the video is that image generators do not *draw*. To draw a face, a human needs to learn how to operate the complicated assemblies of bone and muscle we call arms in a specific way, but most humans can *imagine* faces pretty well because they have seen them many times and have learned what faces generally look like.
To explain in more detail, let's get a bit deeper into the training process. We'll use Stable Diffusion as our example since that's the focus of this video, and since LLMs like ChatGPT are vastly different. As your video covers, training data for an image generator like SD is composed of images with a caption or tags describing each image. Now suppose you want to train a completely blank, fresh model (Not a finetune) to depict only bananas.
You get a bunch of pictures of bananas with the caption 'banana', and then you begin the training process. These images are used like flash-cards: they're shown over and over to the deep learning model, along with the word. The model doesn't have eyes or any visual sense; it 'sees' the images as number grids, but it notices there are a lot of similar RGB values (shades of yellow) that usually appear in a similar pattern (a curved shape). Therefore, it learns that a banana is usually yellow and usually curved. You finish the training, then you ask that model to show you bananas, and you get things that are generally yellow and curved.
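The "number grid" idea in the comment can be made concrete with a toy sketch. The tiny 2x2 "images" below are invented RGB values, not real training data; the point is only that a dominant colour falls out of plain arithmetic on the grids:

```python
# How a model "sees" images: each picture is just a grid of (R, G, B)
# numbers. These tiny 2x2 "banana photos" are invented for illustration.
banana_images = [
    [[(230, 210, 40), (225, 205, 35)],
     [(220, 200, 30), (235, 215, 45)]],
    [[(240, 220, 50), (228, 208, 38)],
     [(222, 202, 32), (230, 212, 42)]],
]

def average_colour(images):
    """Average each RGB channel across every pixel of every image."""
    pixels = [px for img in images for row in img for px in row]
    return tuple(sum(px[c] for px in pixels) / len(pixels) for c in range(3))

avg = average_colour(banana_images)
# High red and green with low blue is "yellow" -- the kind of statistical
# regularity that repeated exposure to captioned images rewards.
print(avg)  # roughly (229, 209, 39)
```

A real diffusion model learns far richer statistics (shapes, textures, correlations with the caption), but the raw material is the same: grids of numbers and the patterns across them.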
An important thing to note here is that this new banana-model doesn't contain those training images. It doesn't save or cache them in a catalog it can reference, and they can't be extracted from it (This is why image generation models are so much smaller in file size than their training sets). It only keeps the conclusions it drew from those images, the learning.
This is pattern recognition, the same thing that most of our learning is founded on. This isn't a coincidence either: machine learning's goal is to emulate human learning because human learning is so flexible and powerful. The architecture that Stable Diffusion is built on uses a neural network designed to approximate neurons in a human brain. This isn't to suggest any ethical or legal implications, and it's still true that there are *many* differences between human learning and machine learning. But it's important to realize that when some people say 'AI learns like humans do', they're being perfectly reasonable to an extent. They're talking about pattern recognition, about showing an AI a flash-card and having it make its own connections.
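As a loose illustration of the neuron analogy in the comment, here is a single artificial neuron trained with the classic perceptron rule, a much simpler ancestor of the networks inside Stable Diffusion. The colour examples and labels are invented:

```python
# A single artificial neuron trained with the classic perceptron rule.
# It learns the regularity discussed above from labelled "flash-cards":
# banana colours have high red/green and low blue. Data is invented.
examples = [
    ((230, 210, 40), 1),  # yellow -> banana-ish
    ((240, 220, 50), 1),  # yellow -> banana-ish
    ((40, 60, 220), 0),   # blue   -> not
    ((30, 180, 60), 0),   # green  -> not
]

weights, bias = [0.0, 0.0, 0.0], 0.0

def fires(rgb):
    """Weighted sum of normalized RGB inputs, thresholded at zero."""
    x = [c / 255 for c in rgb]
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

for _ in range(50):  # repeated passes over the flash-cards
    for rgb, label in examples:
        err = label - fires(rgb)        # perceptron update rule: nudge the
        x = [c / 255 for c in rgb]      # weights toward examples the neuron
        weights = [w + err * xi for w, xi in zip(weights, x)]
        bias += err                     # got wrong, leave correct ones alone

print([fires(rgb) for rgb, _ in examples])  # matches the labels after training
```

Stable Diffusion's network stacks millions of such units and learns its weights by gradient descent rather than this rule, but the "adjust toward what you got wrong" spirit is the same.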
There are lots of videos on YouTube explaining this better and in more detail than I can in this comment, and I recommend anyone interested have a look around. Vox's video 'AI art, explained' is a good place to start.
youtube
2024-10-14T05:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugzht5Vy4vuv5KBGFg54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzbGMyvMjR0KKfpD9t4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxuMk1tmqi-T8MVTFZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUJgpg8sfoRqJQoU94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy3moE6JByganrKzOV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzjD6uAgvZ5KwnHsLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugy-8DSTgjy0Zo-ASJp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyT0EOjRJriWnw_HE54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyjeXc-pHNGPwyowl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwTYJeGkTluFrvfxXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
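One practical use of a raw response like the one above is sanity-checking the labels before they land in the results table. Below is a minimal sketch, assuming the allowed label sets are exactly those seen on this page (the real codebook may permit more values); the two records are copied from the response above:

```python
import json

# Validate coded records from a raw LLM response against expected label
# sets. The ALLOWED values are inferred from labels appearing on this
# page; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "approval",
                "disapproval", "resignation"},
}

raw = '''[
  {"id": "ytc_Ugzht5Vy4vuv5KBGFg54AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy3moE6JByganrKzOV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"}
]'''

def validate(records):
    """Return (comment id, dimension) pairs whose label is not recognised."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim))
    return problems

records = json.loads(raw)
print(validate(records))  # [] -> every label is in the allowed set
```

Running the same check over the full ten-record response would catch malformed or out-of-codebook labels before they are written into the Coding Result table.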