Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples:

- ytc_Ugx3RIUXD…: I’ve never really fully understood the “A.I. art is stealing” argument. I haven’…
- ytc_UgyGANdmW…: I'm sure atleast some people would believe it if it was an AI. BUT THERE ISN'T E…
- rdc_moxeuer: I think that misses the issue. When an aspiring junior says "AI will replace us…
- ytc_Ugz5wgPWy…: not sirpyes trying to find every excuse and reason that their friend may have sa…
- ytc_Ugw6bcLJ0…: Everyone thinks they can spot a deepfake, but the reality is that its unsettling…
- ytc_UgxBUawbw…: Except we don't even have AI yet, we have pattern-recognising algorithms running…
- ytc_UggCaMkzD…: I have got a question : I mean we are making Artificial Intelligence to make mac…
- ytc_Ugz6HbOvu…: Me in court staring at the 40 megapixel ultra realistic AI generated clip of me …
Comment
AI is not stealing any data, though. When the model is fed its training dataset, what happens is an adjusting of its internal statistical weights, and so it doesn't remember _any_ of the training images verbatim. In fact, it can't.
(Unless a particular image is duplicated far too many times in the dataset -- this can occasionally happen for extremely famous images and explains cases of gross overfit like Midjourney's Afghan Girl fiasco, or the fact that Stable Diffusion has far too exact of an idea of what Mona Lisa or Starry Night look like. But it's not going to happen for anything less famous and overduped than that. Essentially, about 99% of the time you can be sure that whatever you're getting out of the AI is a genuinely novel image -- whatever traces it may have of any training data have been completely transformed to the point that it doesn't even count as a sample anymore. You have to either hit one of the few overfitted exceptions or intentionally abuse image-to-image prompts to get actual plagiarism, and the latter is 100% on the user.)
The process is far more similar to the model "looking" (in a computery way, not like we do) at billions of images, and then "remembering" (again, in a computery way, not like we do) generalized concepts about them. The models can even recombine those concepts into something that likely never existed in its training data or even at all -- a classic example OpenAI used is a prompt "a chair in the shape of an avocado". You probably haven't seen an avocado-shaped chair in reality, and neither have I, but the model absolutely generated all kinds of avocado-ish chairs! (So the idea of "it can't combine various concepts together" is trivially disprovable, making diffusion models far less limited than their training data alone...)
"Looking at references and learning from them" has never been illegal nor immoral in the entirety of art history. Now, some coders have created a bunch of math that can also "look" and "learn" (in a computery statistics way, I'm not saying AI works the same way as our brains do) and apparently it's making people like you very upset, to the point of thinking it's even _remotely_ close to actual theft.
What has _actually_ happened is that a bunch of software engineers have managed to create _something,_ and its existence is producing outcomes that are massively unfavorable to you and your friends. Fueled partially by fear/anger and partially by misinformation, you begin thinking that there HAS to have been some kind of an ethical misstep in this process, that they HAD to have broken some kind of an existing ethical norm to get there, especially with how they got and used the data, which seems like an easy scapegoat.
If you actually take a look at the process in granular detail, the answer is no, they haven't broken any existing ethical rules to get here. Literally EVERY step of the pipeline has strong non-AI precedent of being legit. Sucks to get the short end of the stick, of course, but that's no reason to spread misinformation on the philosophical points and/or outright lies about what the system can and cannot do.
(And considering you _sell for profit_ acrylic prints of paintings that are really just scene redraws from Squid Game, like "The Alleyway", complete with not only the characters, but also the composition and background, YOU of all people have absolutely NO ground to stand on. Some artists may have the right to complain, but you yourself have done something far closer to outright theft than almost anything AI models have ever done. I wonder if you being a hypocrite is incidental to your anti-AI stance or if the anti-AI stance is itself hypocritical at its core.)
Source: youtube, "Viral AI Reaction", 2023-01-14T13:4…, ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugy1D9gh8TYI9I_R8ox4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxDNro5HpPQ2y5QLb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWRD_TjrlUH_Ybs3B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzLxMDzxMQQ0gwQCXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfrehYccz1uXW-OP94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyTVVgI-P3JJ5KAL6R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyQEN5CAORn_NFDSbd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwOmvGCJ9c0GlPMhKt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFNw13V2AosbjS2Ct4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugw3SsaWm8iyykPw4k14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
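The raw response above is a JSON array with one record per comment, using the four dimensions from the coding-result table. A minimal sketch of how such a response might be parsed, validated, and looked up by comment ID (the allowed category values are inferred from the table and the responses shown here, and `parse_codes` is an illustrative name, not part of any actual pipeline):

```python
import json

# Allowed values per dimension, inferred from the coding table and the raw
# responses shown above; the full codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"approval", "disapproval", "resignation", "indifference",
                "mixed", "outrage", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any record whose value falls outside the known categories."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

# One record copied from the raw response above.
raw = ('[{"id":"ytc_UgzLxMDzxMQQ0gwQCXZ4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzLxMDzxMQQ0gwQCXZ4AaABAg"]["emotion"])  # indifference
```

Validating against a closed value set at parse time is what makes a downstream "look up by comment ID" view safe to render: any model drift into unlisted categories fails loudly here instead of silently corrupting the coded dataset.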