Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Thanks for educating us on AI. Safe words are low tech, just like The Club, for…" (ytc_UgywbXgNV…)
- "@AsianDadEnergy, I work in research, design and build custom hardware and soft…" (ytr_Ugw_bs2RB…)
- "The problem in most companies is the CEO mindset. These people are not engineers…" (ytc_Ugxqeo45I…)
- "So if developing AI might kill us all then WTF are you morons doing it? Pull the…" (ytc_UgyUU8k4p…)
- "Midjourney is definitely trained on Laion 5B. It’s all discussed in their offici…" (ytc_Ugw2cNHz4…)
- "Haven't been able pay for healthcare for the last 10 years nothing new there. I …" (ytc_UgzFqXGpN…)
- "sorry, AI is ALREADY being used for domestic surveillance. how do you think the…" (ytc_Ugy29IU4P…)
- "hmm, simple fact of the matter is: this is fiction (for those who didn’t bother …" (ytc_UgzRUta-O…)
Comment
Yep. I see AI image-generating software mostly as tools. The company making the tools is not really the one that bears the responsibility for how the tool is used. Especially when it comes to custom-tuning models.
When it comes to some of the other arguments in the video, paintings tend to have signatures in the lower right corner. If a human artist signs a piece in the lower right corner, is the placement of a signature plagiarized from all the other artists who signed their work, or is it just, well, a common feature shared by a lot of paintings? If a human artist is commissioned to make a painting in the style of Picasso, well, that image will look somewhat like a Picasso (assuming the artist is any good). Why would commissioning an "AI artist" to do the same suddenly be a completely different thing?
Also, the AI generations are indeed completely unique. Not a single pixel in them has been copied from the training material. Instead, the AI art generators, much like human artists, just have an "idea" of the "essence" of a given style, item, or whatnot - just coordinates in a sea of ideas.
As for corporate greed... I actually do not think AI art generators are much of a business for the companies creating them. Sure, they do charge something for using them (in some cases), but that likely barely covers the computing costs. Nor are they even the main products. The goal is much larger: general artificial intelligence that knows not only about art but about *everything* else.
The art models are kind of a side effect of a much larger research effort, something interesting that was found almost by chance when researching computer vision. In order to make stuff like robots that can navigate in the real world, they need to recognize and understand images from their cameras: that's a chair, that's a table, that's a door, that's a human. In order to do that, the models needed to be trained with those millions of images to recognize, well, pretty much anything. And to understand what was going on inside the model, the researchers created ways to visualize what those concepts looked like inside the AI. And out came... images. Very odd ones at first, but recognizable enough to do research to make them better. At that point (and likely also now) the main driving force behind this was, well, curiosity: "let's see how good we can make these!".
Finally, when it comes to artists losing work: yes, it will happen. The field I mainly work in (corporate and commercial work) will most likely be among the most affected in the short term - no one really cares who or what makes an ad; all that matters is that it's good and cost-efficient. There are no auteurs in corporate communications (as much as we'd like to think so, heh). Personally, I use AI tools to speed up my work - I can now produce more, and better, stuff.
But it's not just art. AI is already coding at a roughly human level - better than I am, at least, in many cases. It is getting very good at writing too. And pretty soon it will be good at, well, everything. Anything humans can do. I mean that literally.
Source: youtube · 2022-12-15T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgwEPDhgms-iGYX90it4AaABAg.9jUCr-1khvh9j_q_6mVVDg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwEPDhgms-iGYX90it4AaABAg.9jUCr-1khvh9kLbfWY6z9_","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwEPDhgms-iGYX90it4AaABAg.9jUCr-1khvh9kP_Oak6XFb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugwk7x5Zj-XhHEeZGcB4AaABAg.9jU5FUlS3b_9jfYnG67xDn","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgzvlGilHNh2ANJ7hVB4AaABAg.9jU2S_t2DrZ9jUZrBH0gr0","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzvlGilHNh2ANJ7hVB4AaABAg.9jU2S_t2DrZ9jUdE7Jiet3","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxDm4ednhqW_k-VSh54AaABAg.9jTxrWsn68RA62ZknFRrjF","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgytxyU2B5R3uxeJmTR4AaABAg.9jTvtChlJaT9jTyXH6G0Ue","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgytxyU2B5R3uxeJmTR4AaABAg.9jTvtChlJaT9jU0KxAHwhb","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgytxyU2B5R3uxeJmTR4AaABAg.9jTvtChlJaT9jU6faYsnXF","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
```
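The "look up by comment ID" step the page describes can be sketched as parsing the model's JSON output and filtering on the `id` field. This is a minimal illustration, not the tool's actual implementation; the `lookup` function name is hypothetical, and `raw_response` is a one-record excerpt of the JSON above.

```python
import json

# One-record excerpt of the raw LLM response shown above (the record
# matching the Coding Result table: user / consequentialist / industry_self / indifference).
raw_response = """[
  {"id": "ytr_Ugwk7x5Zj-XhHEeZGcB4AaABAg.9jU5FUlS3b_9jfYnG67xDn",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "indifference"}
]"""

def lookup(raw, comment_id):
    """Parse the model output and return the coding dict for one comment ID,
    or None if that ID was not coded in this batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup(raw_response, "ytr_Ugwk7x5Zj-XhHEeZGcB4AaABAg.9jU5FUlS3b_9jfYnG67xDn")
print(coding["responsibility"], coding["emotion"])  # → user indifference
```

Returning `None` for unknown IDs (rather than raising) makes it easy to distinguish "comment not in this batch" from a malformed response, which `json.loads` would surface as an exception.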