Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzMwffmn…: I don't understand how an LLM is able to provide this answer, when there were so…
- ytc_Ugx0ZvXMQ…: What we seek in art is not mixing up what already exists. An artist will have li…
- ytc_Ugxj9S3Uv…: everyone hates AI because all they've heard is slop. I've heard some fantastic s…
- ytc_Ugxcj157F…: i writing comic about this.. some person come to a litle girl he ask " can u dr…
- ytc_Ugxg-98Fx…: thats not the AI’s problem. That is still the dev’s problem. Inexperienced devs …
- ytc_UgzVW3sQE…: Ask ChatGPT if Renee Good had custody of her kids and it will lie and say she di…
- ytc_UgzTqSg-s…: AI, unlike other technological advances, is kind of useless, often harmful, and …
- ytc_UgyiaNUXt…: You should be able to turn off this problem, as long as it does not physically i…
Comment
Oh hey, computer scientist AND artist (cinema, music, photography) here. On the topic of theft...there technically is no theft happening here, despite, well, theft occurring pretty blatantly in front of us. This, I think, is where everyone's brain goes cold, they don't like hearing that they've unknowingly "consented" simply by sharing their art, and I don't blame them, because it feels like gaslighting, right? Terms of Service, the bible for lawyers at these conglomerate mega-platform companies, on most platforms will have a clause stating the storage of your images on their platform (i.e. posting to Instagram and that image living in Meta's blob storage) implies the right for that platform to use the image however they want. Of course, nobody reads that thing right? Everyone who's mad right now is mad at the wrong thing; a little "don't hate the player, hate the game" moment for artists in the 21st century.
1. Communication of any and all implied consent on a platform is not regulated, nor is the mere inclusion of said implied consent (gotta love unregulated markets), and thus often intentionally engineered to reduce the time spent on a TOS agreement to near-zero seconds, streamlining the consent of every user. Furthermore, the all-or-nothing agreement means you either bend the knee, or don't use the platform.
2. Platforms have an ever-present opt-out mentality: "if a user does not want to do this, they will have to opt out". Opt-in is the only way forward, ethically, but will always be in direct opposition to capitalism as it removes any embedded exploitation of users (i.e. data capturing for "a better ads experience"). In the vein of point one, there's neither an opt-in nor opt-out option for your images and data being used to train machine learning models when agreeing to the terms of service.
3. AI training sets for commercial use, under capitalism at least, cannot be aggregated ethically AND sustainably. The amount of data needed to train a model is far more than any one artist's portfolio of work, which means the rate at which your monetary investment is required scales nearly linearly with however many artists consent to their work being used for the model.
All of the conversations right now around AI content generation are necessary, but I fear they'll die off like everything else, and little to no change will occur. Be sure that you are angry at the true culprit, and not the outcome of their deceit, so you can better tailor your ideas for a solution. That is how we will make progress towards an ethical data aggregation model.
Also, re: watermarks, I would just like to say that, NO, these are not direct copies of any watermark, but rather a generalization that, aside from the image at hand, artists tend to "squiggle here and there". The reason they look dystopian is because they are. Anyone attempting to show you a direct or near-direct port of a watermark is either showing you their own vastly overfit model they trained (less likely), or has fallen prey to self-referential misinformation run amuck online. The waters are so muddy on this topic, now, that I've had to write general responses to the misinformation to share whenever I see it.
Source: youtube · 2023-01-01T22:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
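The four coded dimensions can be sanity-checked programmatically. Below is a minimal validation sketch; the allowed value sets are only those observed on this page (they are an assumption, not an official codebook, and the real vocabularies may be larger):

```python
# Dimension vocabularies inferred from the values visible on this page.
# ASSUMPTION: the real coding scheme may define additional values.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "company", "distributed"},
    "reasoning": {"mixed", "consequentialist", "contractualist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "resignation", "fear", "outrage", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim!r} value: {value!r}")
    return problems

record = {"id": "ytc_UgzhQncG4Jw0IIBrsbV4AaABAg",
          "responsibility": "user", "reasoning": "mixed",
          "policy": "none", "emotion": "mixed"}
print(validate(record))  # -> []
```

A check like this catches the most common failure mode of LLM coding runs: the model inventing a label outside the agreed scheme.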
Raw LLM Response
```json
[
{"id":"ytc_UgzhQncG4Jw0IIBrsbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDRm6RvABIbQLmYuB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJSwc48YRMZabirrt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXsQhu9yi0tbfzLIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0ykTwrKBfpu-g0414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFFFAnYEbP6RZF3El4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw10P_5wuExTDhf-J94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx9b2eIbbb0Jww-saV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwunNQP5m3BHv1oa7d4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzsnFI_jcR48erHmdh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
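Since the raw response is a JSON array of per-comment records, looking up the coding for a given comment ID is a one-line index. A minimal sketch, using two records copied from the response above (the raw string is abbreviated for illustration):

```python
import json

# Parse the raw LLM response (a JSON array of coded records) and
# index it by comment id. Records copied verbatim from the response
# above; the full response contains ten.
raw = '''[
  {"id":"ytc_UgzhQncG4Jw0IIBrsbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDRm6RvABIbQLmYuB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

coded = by_id["ytc_UgwDRm6RvABIbQLmYuB4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # user indifference
```

In practice the parse should be wrapped in a `try`/`except json.JSONDecodeError`, since LLMs occasionally wrap the array in prose or emit trailing commas.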