Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Just remember, artists can do ANYTHING. Ai cannot. You can draw the ai commissi…" (ytc_Ugw8v9Opp…)
- "disabled artist praying for the downfall of ai here! multiple symptoms of my dis…" (ytc_UgzOePzYs…)
- "In my opinion, AI is three steps from deadly. 1) persistent state. 2) persist…" (ytc_UgzryG0qg…)
- "How do i make this short. None of this is shocking. For me, it makes me happy t…" (ytc_Ugy-p1P5Q…)
- "@titankronos65173 Not every drawing software is with generative AI you dunce. …" (ytr_Ugx7vQMxL…)
- "Why do you think Elon is making a chip that can go in your head to make you into…" (ytc_UgzFqxqkC…)
- "@xRafael507if you check behind the models you will see that all they promise is …" (ytr_UgzwkppzS…)
- "I don't know who said but I like this quote: \"AI could have been a meticulous to…" (ytc_Ugwqyhhhf…)
Comment
@laurentiuvladutmanea
> But the companies who made them did. That is a fact.
And? Web scraping in general has been established as fair use for some time now (google Common Crawl if you don't believe me and read its wikipedia article -- by the way, Common Crawl eventually became the basis of LAION as well), so this is a moot point that has already been decided the other way, and for damn good reasons. (Hint: image generation isn't the only thing these datasets are used for. Think image recognition, automatic captioning, etc...)
> This claim is incompatible with what I found.
What you found is probably either a few specific overfit images that the model overmemorized, or abuses of img2img image prompts. Both of which I've mentioned.
> 99% is still not good enough. And it is more like 98.2% from what I found.
If you're referring to that one single paper, some of the conditions used for their testing were a bit artificial, but even so, the majority of the issue also comes down to overduplication and resulting overfitting. Which machine learning researchers and developers _don't actually like_ because overfitting is a bug! If you think this tech is going to improve, it's naive to think overfitting issues won't be solved at some point, and probably soon.
> These rules are about humans. Not mindless programs designed to replace artists.
A human brain is nothing more (and nothing less) than a giant biological machine evolved to interpret the dataset gathered by our senses.
The fact that now code can do something similar, but faster (yet less flexible) doesn't change anything. The ONLY thing that has changed is scale.
So no, I'm not biting on this one. There's no good reason to change this rule just because machines aren't humans or whatever, and any attempts to say otherwise look more and more hypocritical by the day.
> It is. All I found about AI ethics make this immoral.
Whatever you found is probably wrong!
> No. There is no precedent, because programs designed to replace artists on such a scale did not exist before.
"You can do X but you cannot have a program doing X for you" is an idiotic rule that has no good reason to exist. It's hypocritical to insist otherwise.
If these models did anything beyond "look at images and extract concepts" I'd be with you, but looking at images and learning concepts has always been fine. It's stupid to declare it's suddenly not fine just because code can do it faster.
Besides, if there _is_ no precedent, that would mean there is no established ethical rule _either way._ I believe the non-AI precedent is strong enough, but if you don't, it means you're trying to say "this is unethical" without any sort of existing norm to back _you_ up as well. The only thing you'd then be able to reasonably say is "ETHICAL STATUS: CURRENTLY UNKNOWN". You're contradicting yourself here.
> What is wrong with him living by doing what he likes?
Nothing, really, but the piece I mentioned is so directly derivative that it might not even hold up as transformative. Sam should be the LAST person to scream about "AI theft" or whatever. It is unbelievably scummy and hypocritical for him to complain about being copied while he himself has copied something SO blatantly and SO completely, and then sold the result. It's the hypocrisy that stings.
The example I'm talking about isn't your average fanart, either -- if you put it side by side with the original Squid Game photo, it almost looks like the output of a style transfer program or a PS filter making it more painterly-looking. ("Almost" is the key word; if you look closely, you can tell it's actually redrawn, but still.)
> Designing a program to take jobs away from poor and middle class artists is not the same as a middle class person living and surviving from doing fan-art of successful shows.
"This program has the potential to threaten my career prospects" is not the same statement as "this program has been designed in an unethical manner", and you know it. Once again, there is simply no step in the entire development pipeline where anyone involved has done anything wrong. As for the result, the existence of the overfit images (that the model really can regurgitate in far too much detail) is a legitimate ethical issue, but AI devs don't like it, either, so it's gonna be solved, and probably sooner rather than later. It's not intrinsic to the technology as such.
youtube · Viral AI Reaction · 2023-01-14T18:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
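The table above assigns one label per coding dimension. A minimal validation sketch, assuming the label vocabulary is exactly the set of values that appear in this export (the real codebook may define more categories; the `validate` helper and `ALLOWED` mapping are hypothetical, not part of the tool):

```python
# Label sets observed in this export (assumed vocabulary; the real
# codebook may allow additional categories).
ALLOWED = {
    "responsibility": {"company", "user", "none"},
    "reasoning": {"deontological", "contractualist", "consequentialist", "virtue", "mixed"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"outrage", "indifference", "approval", "resignation", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding shown in the table above.
coded = {
    "responsibility": "company",
    "reasoning": "contractualist",
    "policy": "industry_self",
    "emotion": "mixed",
}
print(validate(coded))  # → []
```

A check like this is useful because LLM coders occasionally emit labels outside the codebook, and catching that before aggregation is cheaper than auditing afterwards.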
Raw LLM Response
[
{"id":"ytr_Ugw-FgPnivIYVqNeGSB4AaABAg.9ktG2rxtA409kwYo1jF0N4","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugw-FgPnivIYVqNeGSB4AaABAg.9ktG2rxtA409l29bhU3-gE","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytr_UgyKZ57FLJvVAmwqL0h4AaABAg.9kt0rKBzbOD9l8_4GC7hWT","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgzLxMDzxMQQ0gwQCXZ4AaABAg.9kry1MnRO0J9ksPD3rog_I","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzLxMDzxMQQ0gwQCXZ4AaABAg.9kry1MnRO0J9ksSvSqNqrF","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgzLxMDzxMQQ0gwQCXZ4AaABAg.9kry1MnRO0J9ksgZI1I5R-","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzLxMDzxMQQ0gwQCXZ4AaABAg.9kry1MnRO0J9kso9gNKyd5","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_UgxfrehYccz1uXW-OP94AaABAg.9krtCE5MMuE9ksOs6vjve9","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugx-HkwsXLYVGobY54t4AaABAg.9krqp36NfCh9m5ercUWcia","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugyv3x7PFc4in4CBGLR4AaABAg.9kpoaaB9I1C9ksQmda2NvO","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
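The raw response is a JSON array of per-comment codings. The "look up by comment ID" view above can be reproduced with a small index over that array; a sketch, assuming the response text is available as a string (the two rows are copied from the output above):

```python
import json

# Stand-in for the raw model output shown above (two rows copied from it).
raw_response = '''[
 {"id":"ytr_UgyKZ57FLJvVAmwqL0h4AaABAg.9kt0rKBzbOD9l8_4GC7hWT","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytr_Ugx-HkwsXLYVGobY54t4AaABAg.9krqp36NfCh9m5ercUWcia","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]'''

# Index by comment ID so any coding can be inspected directly.
by_id = {row["id"]: row for row in json.loads(raw_response)}

coding = by_id["ytr_Ugx-HkwsXLYVGobY54t4AaABAg.9krqp36NfCh9m5ercUWcia"]
print(coding["reasoning"], coding["emotion"])  # → virtue mixed
```

Building the dict once and looking up by `id` keeps each inspection O(1), which matters when the export holds thousands of coded comments rather than ten.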