Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's complicated technology so I can't blame you (or the countless others who've gotten it wrong) too hard for this, but this video seems to be yet another clear misunderstanding of how the image generation works, which is sad to see given it's gotten so many views and is spreading a lot of misinformation. It would've been nice to see some actual thorough research into the technology before ignorantly claiming that it's "using artists' work without their consent" and "blatantly plagiarising" their images. I would be glad to explain how the actual tech works if people are confused or interested, but I can't imagine it would be too difficult to read some introductory articles on neural nets in the meantime, since I want to respond to some other points raised in the video rather than just talk about the mechanics of the image generation. I would be glad to do that, though, if people still have a hard time grasping it after doing some more charitable research before denouncing it. Believing that the AI is simply copying or "stealing" anyone's work in any way is pretty clear proof that you don't know how it works, because the way the model operates makes that impossible: it isn't using anyone's work for any purpose, any more than a person who sees your art and gets inspired by it is. So my question to people against this form of art is: would you blame someone for simply seeing your art and taking inspiration from it, if their art style was similar to yours and they were very good at drawing similar strokes and coloring in very similar ways? If yes, we can have a conversation. If no, then you don't understand the tech well enough. The AI is not using anyone's art, period, with or without consent.
The only real ethical question is: is it unethical for art that's publicly available online to be seen by others? Artists whose work is online simply cannot filter through every person who might see it and stop them from being inspired by it, or from acting on their knowledge of what your art looked like when producing their own. I don't even know how you would begin to regulate such a thing if you were genuinely upset that people studied all your drawings religiously, tried to copy your art style, and you wanted credit. Isn't that how we learned in the first place? Entire art movements evolved because artists were inspired by others and learned to draw from others. So criticism without understanding the tech in this way would involve us trying to legislate people's eyes: if you learn to draw, you gotta do it from the ground up or with express consent from the artists...and if you see someone's art, you better close your eyes and say "nananana not listening" and try your best to NOT be influenced by it, because if you take too much inspiration then you might be blatantly stealing! So it's VERY important to understand the technology, since discussing regulation with people who share the common misconceptions about AI art would lead to these absurdities and more. Additionally, "blatant plagiarism" examples come from either 1. programs that DO engage in questionable methods like copying/pasting stuff directly from the internet, or 2. hyper-detailed prompts that are designed to perfectly match an already existing image. 1 is obviously wrong and I'm sure there are already laws against that lol, but if not, that's another thing we should focus on when we figure out regulations for the actual neural net tech that DOESN'T copy-paste. As for 2, it sounds problematic, but the image that was produced, though similar, was generated completely impromptu, without any knowledge of what the original images looked like.
If someone made a near-perfect forgery of the Mona Lisa from memory, you would blame the artist, similar to how you'd blame the person writing the prompt here; but as I said, there are already plenty of laws against closely reproducing copyrighted work or forging art, whereas it's incredibly hard to copyright an entire style of art, so most of the legal concerns of the people opposing AI art currently fall flat. I understand how frustrating it can be; just the idea that someone can produce art that looks very similar to yours can be angering given how much more effort you have to put in. The important thing, though, is that if you were shown up by someone in art class, as annoying as that may be, if they're objectively not stealing or plagiarising anything, then that's a pride problem and not a legal one. And I sympathize with it, of course, since I would hate for it to happen to me as well, but being able to create similar art much faster and more skillfully can't really be justification for attacking an entire technological advancement (coders are going to have similar gripes soon, as the same technology is already being used to write perfect code in seconds, which is hours or days faster than it takes professional coders). Let me know if I'm wrong, but the frustration seems to come from 2 main areas: 1, that the AI is able to generate art styles very similar to those whose images were included in the training data, which is unfair since they didn't consent, and 2, that this could seriously impact or harm the livelihoods of artists in general. So with that established, I'll try and answer both major complaints. On 1, perhaps we could work on setting some legislation in place where artists are compensated for having been involved in the training data.
Because of the way it works, it might be pretty straightforward to give some kind of royalties to artists whose names are used in prompts, or to prohibit people from writing prompts with the actual names of artists or photographers. It gets more difficult when you consider the fact that ALL the art, photographs, and so on contribute to everything it creates. So if someone asks for a generic pastel painting of a tree, it would be impossible to know who to properly credit for the image, since all generations are an amalgamation of every single image that was in the training data. As I said before, trying to regulate this would be ridiculous, and the AI is generating completely new images from scratch based purely on its knowledge of all the images it's seen before, not copying anything pixel-for-pixel, so once we're all on the same page about how it works, we can definitely discuss who should be credited for what and how. To the idea that artists are losing or will lose their jobs because of this (#2 in my list above): if I had it my way, artists would be significantly compensated, not just for "contributing" to training the model but for existing as humans and engaging in the arts to begin with, as should anyone with any career that takes effort and dedication and deserves to be commended. But unfortunately, in a heavily capitalistic world, businesses come and go, just as all the manufacturers of VHS went out of business, or just as any career could disappear depending on what type of new technology gets invented. Doctors, for instance, would all lose their jobs if we invented a pill that could heal any illness/injury or restore any organ damage (it'd be in the doctor's best interest to make sure you don't get better or they'll lose a job in this system lol, so criticism on that front is just ridiculous in general).
I support compensating disciplines whose existence is challenged by AI because I recognize how fucked the system is, but to someone who supports this type of world, it's not AI's fault that this happens; it's just a product of the times. So I think criticizing AI art through profit is a pretty weak route, considering the people criticizing it are the same ones who appear to care deeply about profit, whereas the very system they're supporting is contrary to their intentions. Essentially, if you're using capitalist arguments to criticize AI art ("but they won't make a profit anymore!"), then don't be upset when a natural consequence of capitalism happens and a business comes and goes like this. It's a sad fact of our current shitty system, so that's why I personally support compensating them wholeheartedly, since no one should lose their livelihood. But remember, I criticize the SYSTEM here, not the new technology, which even the creator of this video has said he's not trying to stop. In summary, most of the attacks against AI art are heavily misguided, due to not understanding how the tech works and due to a strong emotional appeal that real humans are affected by this and already have to struggle so hard to make a living. But after seeing that this art isn't actually copying anything and is simply a new product of the times, and after understanding that the system itself, which would even allow artists to lose their careers or livelihoods because of a new invention like this, is incredibly fucked up, then we can actually work together to make the change that we want to see. Surely nobody wants artists to actually be unable to support themselves or is actively rooting for their demise (maybe a couple of internet cringe contrarians or something, but realistically, if an entire field becomes obsolete it obviously sucks for everyone involved).
Even in the "passive aggressive" email mentioned in the video, the competition directly goes toward supporting real artists, so the action definitely backs up their claim that they really do support living artists. If people learn how this technology works and start rationally discussing some potential harms and how we could better legislate this new tech, then we could reach some fantastic collective conclusions, as opposed to shitting on each other for no reason. I really don't think there's much disagreement among the properly informed and among people who actually care that others make a living. Of course there are gonna be scumbags on either side, but "no AI art" doesn't have to be an entire movement, because when the tech is understood, and when it's carefully dissected and discussed, there is nothing to fear.
YouTube · Viral AI Reaction · 2022-12-27T01:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwinzdpHqsoWEH3ho94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgwG1CdIviMGF7EhC3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw0w0tzTv0t3yCX6Wd4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgydsIgXfnoRwzK3UNN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwSQfh1gb0Zo9U6TTh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxAVf0U4KBoMzqfBQp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzM311wn8C3a-bAL_54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzXPCmZ7RwlD6Z4xUB4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxozpvB2Wc1bDK8ByN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzWPpHKsWuIyJvCGzV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
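A minimal sketch of how a raw response like the one above could be parsed and checked. The helper name `index_codings` and the `REQUIRED_KEYS` set are assumptions for illustration (not part of any pipeline shown here); the two record values are copied verbatim from the response above.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
 {"id":"ytc_UgwSQfh1gb0Zo9U6TTh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwinzdpHqsoWEH3ho94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]'''

# Every coding record is expected to carry these five fields.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(response_text):
    """Parse the JSON array and index records by comment id,
    raising if any record is missing a coding dimension."""
    records = json.loads(response_text)
    by_id = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw_response)
# Look up the coding for the comment shown on this page.
print(codings["ytc_UgwSQfh1gb0Zo9U6TTh4AaABAg"]["emotion"])  # mixed
```

Indexing by `id` makes it cheap to cross-check a single comment's coding (like the Dimension/Value table above) against the batch response.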