Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
So, what is your main issue with it? That the model is able to 1:1 replicate your images (thus a copyright issue), or that it can imitate your style to some degree? You have shown an example of the photo of that Afghan girl being replicated, but that might be because the image appears in the dataset thousands of times. Even then it wasn't an exact replica, though admittedly pretty close. Have you tried to create an exact replica of one of your images? If so, what was the result? Those are things I would have liked you to talk about in this video.
Edit: I'm asking because I wonder what your preferred legal and pragmatic outcome would be here. Copyright infringement is already against the law, so if someone replicates an image of yours using AI, that image would still be copyright protected and could not be used regardless. If it's merely "inspired" by your work or replicates your style, I'm not sure what the legal way to protect against that would be.

But let's say you could get lawmakers to forbid training on your images on a large scale. Would this actually help you? An AI that can use an input image as inspiration could and likely will be built; you can check out "ControlNet" for a rough example of this. So even if the model wasn't trained on your images, the end user could simply give the AI a "style reference" and it could make an image in your style on the fly, similar to the older style-transfer AIs.

Would you then seek to ban replicating styles as well? If so, I think that would create a legal nightmare in the opposite direction. Anyone's art could be accused of copying someone's style, with the creator of an image possibly being subpoenaed and having to prove exactly how they created it, which, due to random seeds etc., might not even be possible. And of course, the legal system works for those who have money. Would that protect you? I think it would more likely end up protecting huge corporations. Imagine Disney or another big player accused you of stealing their style: you would have to either settle and take down all your art, or go through an expensive legal process to prove your innocence. It would basically be carte blanche to kill off smaller artists who can't afford to defend against lawsuits, backfiring in a major way.

Do you have thoughts on how to solve this problem, and the problem in general? I understand your concerns to a degree, but I wonder what your ideal solution would be.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | Viral AI Reaction |
| Posted | 2023-03-01T14:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugyxi6VnDTkJItqZzW54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwKEreKpA2y-WxgVft4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxaGrUNC0TmMtqaL1B4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugypn1IQUFYq8gJ0LY94AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxPK_doezMylMKaU6J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugz1JkF_xZPAqyReRLB4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwlyy01HBFZmpSuJwZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwnHSwWbFCuBHEql414AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyNCUkLUwJfv2XIolZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxUiwg3Jvsy_7ru-JZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
```
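The raw response above is a JSON array of per-comment codings, one object per comment with an `id` plus the four dimensions from the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response might be parsed and looked up by comment ID (the function and variable names here are illustrative, not from the actual pipeline):

```python
import json

# Raw model output: a JSON array of per-comment codings, as shown above
# (truncated here to two rows for the example).
raw_response = """[
  {"id": "ytc_Ugyxi6VnDTkJItqZzW54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwKEreKpA2y-WxgVft4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a batch coding response into a lookup table keyed by comment ID."""
    codings = {}
    for row in json.loads(raw):
        # Reject rows missing the ID or any of the four dimensions.
        if "id" not in row or any(d not in row for d in DIMENSIONS):
            raise ValueError(f"malformed row: {row!r}")
        codings[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return codings

codings = parse_codings(raw_response)
print(codings["ytc_Ugyxi6VnDTkJItqZzW54AaABAg"]["emotion"])  # fear
```

Keying the table by comment ID is what makes the "look up by comment ID" inspection above possible; the validation step guards against the model omitting a dimension in its JSON output.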