Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytr_Ugz3OeihU…: "It says on all ai programs not to take medical advice from them, it literally sa…"
- ytc_UgwwcMRzU…: "That's why I don't think it's a robot I think they're putting human brains insid…"
- ytc_UgwDGJG6h…: "No, it can't. I don't have any emotions, except for unadulterated anger at the t…"
- ytc_UgxZk_DeQ…: "God's will divine federal strong central authority unity AI with citizenship pay…"
- ytc_UgzpQ-a3B…: "Sounds like better prompting is needed... AI aims to please and does what it's t…"
- ytr_Ugyy4QTyx…: "Except diffusion models are not human. They do not "learn" anything. They don't …"
- ytr_Ugzdi8aP_…: "Ai generated images could take their job. they're not crying, they're just tryin…"
- ytc_UgxxWB3BD…: "no work, no money, no consumers, no business. we eill all turn (back) to growing…"
Comment
I see a lot of generative AI hate here, so I know I am probably walking into trouble, but I find this topic pretty interesting, along with the video and the concept of corrupting images like this. So here are some of my points of view, as I think we need to find productive solutions that account for generative AI; I don't see it disappearing again, since Pandora's box has been opened.
Interesting, but this approach raises some concerns for me as someone who wants to do game development full-time. Artists who are paranoid about having their art used in training sets might strongly consider intentionally "poisoning" their work without disclosing it, which could have unforeseen consequences even if it is never discovered.
Imagine commissioning an artist to do work, and they sprinkle in a bunch of intentional data corruption out of fear. The client may not be able to notice this, but the end result is still inferior, which some people might consider sabotage or outright fraud, especially in a professional setting. This could very well be defined as malicious data.
For a professional setting, I would much prefer proper licensing methods, since we use generative AI as a tool. Fonts have licenses, sounds have licenses, and of course artwork itself has licenses, so why not training data generated from curated artwork as well?
Imo, poisoning images might be good for a sense of revenge, but in terms of the wider-scale resource costs of generative AI mentioned here, it only makes things worse. I don't see generative AI stopping, for the same reason that AI in all other areas of software has exploded over the last decade. I get the frustration, and generative AI should not, and cannot, replace artists in its current form. Maybe if true artificial intelligence becomes a thing, sure, but that would pose somewhat larger ethical problems than artists being unable to make a living.
I am curious though, because all the points made here seem to relate to generative AI being used in a commercial setting, but what about personal use, such as entertainment or exploration? I mainly use AI as a toy to play around with and see what it can be used for in a practical sense, but posting generated images as your "own work" is just ridiculous. I don't find prompt-only generation to be very useful outside of entertainment, but img2img on the other hand seems really powerful. You can very quickly generate some shapes with an art style or color theme you find interesting and work further from there, but you will still need to put in a lot of work to get something usable in the end. img2img is not perfect either: if you are restricted by constraints such as symmetry, you will face issues. However, it can help with shading and lighting in some cases, which can be quite useful on complex geometric shapes. It is not a replacement for light and shading tutorials though, as you still need to go back into your painting program of choice and do manual work. And if you know exactly what you want to draw, and how to draw it, I still find it easier to just draw it directly.
Recently, I tried rendering out some Minecraft blocks using Blockbench, and used a custom Stable Diffusion setup I have to apply generative AI sort of like a generative filter. It worked okay for simple blocks like deepslate, which is just a gray stone, but even a grass block with a direct reference image was hard for it to comprehend. The grass would often be at a different scale (think grassy open plain), while the dirt sides of the block would be what you expect; a bit like a snow-globe effect, I guess. Alien blocks like crimson nylium were almost impossible to do. You had to trick the AI with substitutes such as "red mossy growths" and "red rocks", because it does not know what "netherrack" is, and the original block texture does not have high contrast between the red nylium and the netherrack, so it would blend them together a lot. Sculk blocks were even harder, because they are very unique and alien-like. I spent an entire day getting usable and recognizable renders in the end, and it was good enough for my use case, but they are far from perfect and would require manual work to look truly good. The thing is, I could just have used the vanilla block renders I made to achieve the same effect, but the re-interpreted images that were generated seemed kinda fun and interesting in this case.
It does pose another curious question though: would these generated images be considered a remix, a mixed-media-like kind of asset, with the "mixed media" being the data in the models, since the data is not actually stored as images directly? I rendered the input images myself, applying the official textures to a custom but simple block model. The input image accounted for a good chunk of what the end result would become, dictating color schemes and even shading in some cases. The images could only be created if you had the exact same image resources I made myself, and only if you used the exact same prompt and weighting of the training data. In some cases you might be layering multiple main models with LoRAs, plus multiple steps for resolution scaling and whatnot, so at what point do you consider the generated images derivative creations made to mimic something else, yet still their own thing? When does the end result become "original enough", I guess, is the real question. Artists of all kinds have taken inspiration from the world around them in every way you can think of, to the point that we have copyright laws to protect and empower artists. At what point, though, do we consider generated content like images and songs a remix, or an "inspired" work, rather than outright theft? There are lots of inspired artists out there who do not pay those who inspired them, because we have accepted as a society that if you do it under specific conditions and terms, it is fine: video games that are clearly inspired by other games, music that uses the same beat or chorus as another song with altered lyrics, animators and painters who learned by studying others' work.
The main issue I have to deal with is that people are being contacted by scammers on e.g. Discord, posing as artists selling "emergency commissions" with a sob story, using either stolen art or generated images, since those are impossible to trace to a source. It is so bad now that a legitimate artist has to be very careful with their wording in order to appear genuine, which is saddening to see. Beyond that though, the only other exposure to generative AI I run into is silly video edits of pop culture using AI voices and imagery of original scenes, dumb AI songs, and the occasional generated image posted by people I know.
You are clearly a talented artist though, and if I were in a designer position at a game company or on a movie production, say, I would never outsource your job to generative AI. I might consider including generative AI in the production pipeline as a tool during planning and pre-production, but why would I want an artist to waste their time on, e.g., cleaning up generated AI imagery when they could make something much better working from scratch?
From my experience, generative AI is just not consistent enough. You say it yourself: it creates a "Frankenstein" mish-mash. This is particularly apparent if you try to generate images of even well-known characters. The style is kinda there if you use the right models, and the elements of the design too, but then details get repeated into places they shouldn't be, the age of the character is off, or proportions are skewed. Same with landscapes and other subjects; the AI models just don't truly understand the fundamental concepts of our world. Even when generating hundreds of images with the same setup and prompt, you will still get images that are too different to be used as-is. I can see how it may work for cases where you just need one image of something, but how would you use this for a comic, or a movie with moving images? In game development, how would you get a rigged 3D model? Even if AI could produce 3D models, what about topology? Symmetry? Poly count? The way I see it, AI-generated content still needs human involvement to clean up the inconsistencies and mistakes. What I mean is that you still need to be competent to make something that looks decent, and even then, you could make it much better from scratch by imposing your personal style and so on. But outside of a professional setting, does it really matter that much?
I guess a big part of the concern for artists in general is that they now have to compete with generative AI, which is a major challenge for sure. Sub-par artwork is just not good enough for most people to warrant spending what money they have on commissions, so they might be more inclined to try to generate something close to what they want. But again, if they want something super specific, like a character with X, Y, Z features in specific places, or a particular color palette, then having an artist draw it is still going to be a much better option if the artwork is important enough, assuming the artist is competent enough, of course.
In the end, the employers that decide to try to replace their workers with generative AI would seem questionable to even work for, though I get why that is not exactly useful thinking if your job and livelihood are on the line. However, you could also argue that this indicates an underlying problem not caused by AI, but merely brought to light by it.
| Platform | Video | Published |
|---|---|---|
| youtube | Viral AI Reaction | 2024-12-12T01:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwrPcbkSYTxq9530cV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyme5AMAsnlr6_IM4h4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgweLC-4vEbCnA4KbA14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwY50ShHTvelP_gsM54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzNe7RiPjgg8sqwGqB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzGC-_wG4vtKQspLCd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgySfW3HNnyma34wxqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwloSHd-4wftqURC354AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyQ8dhVflmVWWb1Tdt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzU5OBs-y8q6nJZELx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
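The raw LLM response above is a JSON array with one record per coded comment. A minimal sketch of how such a batch could be parsed and looked up by comment ID, assuming the dimension vocabularies are exactly the values visible in this dump (the actual codebook may allow more categories, and `parse_batch` is a hypothetical helper name, not part of any tool shown here):

```python
import json

# Allowed values per dimension, inferred from the records in this dump;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    raising ValueError on any out-of-vocabulary dimension value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {rec[dim]!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record batch for illustration.
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]'
codes = parse_batch(raw)
print(codes["ytc_x"]["emotion"])  # outrage
```

Validating against a fixed vocabulary like this catches the common failure mode of the model inventing a label outside the codebook, before a bad record ends up in the coded dataset.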