Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "i don't think ai will ever be conscious, as any future ai will still work like s…" (ytc_UgxbKitwc…)
- "Thank you for your feedback! We're glad you enjoyed the video. Remember, for mor…" (ytr_Ugy9WhYec…)
- "The only thing AI is good at is getting delusional while talking to it and givin…" (ytc_UgyNSiUCY…)
- "I mean, I have heard bad things about public defenders, primarily about their la…" (ytr_UgzyAKsxe…)
- "imagine the pop is become AI , president of usa and china too , and a AI version…" (ytc_Ugzi2CWVh…)
- "if AI is so good right now, how come we cannot bring back the manufacturing in U…" (ytc_Ugwpb_twa…)
- "3:19 what do you mean autopilot doesn't mean, the car can drive itself? Is it no…" (ytc_Ugw0t8t-o…)
- "Yeah I am definitely gonna fv<k one of those in my lifetime for sure…. Just to t…" (ytc_UgySYu4ir…)
Comment
I'm not sure how effective this kind of tool is actually going to be. These things rely on two lines of wishful thinking:
The first is that it kind of only works if an unbelievably large amount of people start using it. Sure, you and your buds can start using the filter, and maybe within a year you've collectively posted 300 poisoned images. That's cool and all, but that is nothing compared to what is likely millions, if not billions, of unpoisoned images in a dataset. Unless millions of people start using this *today*, this affects nothing.
The second is that I can't see this being a major problem for AI image generation training in general. Nightshade has been out for over a year now. In an article I read, it said that you really only needed a few hundred images on a particular subject to effectively poison a dataset. If Nightshade and similar tools are supposed to be this effective, then why are we not hearing about AI image generators crumbling due to the influx of poisoned images? Hell, OpenAI released a research paper not too long ago saying that the biggest issue they were tackling wasn't image poisoning but incorrect captioning, and DALL-E only seems to be getting better and better. The same seems to go for Stability AI, and likely most of the other big companies. This makes me think that since it's been a year since Nightshade's release, these companies have probably already figured out how to counteract the poisoning, whether through some detection method or by restricting their datasets to images posted before Nightshade's release.
While this is a neat tool to use and all, it likely isn't going to be effective in the slightest. You'd need to get every single person posting images to the internet to use it, which isn't going to happen. Alternatively, you could hope that platforms would automatically use the filter on any images hosted, but that also isn't going to happen, since a lot of the bigger platforms either have sneaky bits in the ToS or EULA that say "we can do whatever we want with what you post here" or are using the posts on their platform for their own training. Some filter that probably isn't even effective isn't the future of fighting back against companies scraping images from the internet. Unless there is some major policy reform across multiple countries, there is literally nothing we can do. Big Data has been a thing for decades. This was going to happen eventually.
Source: YouTube — "Viral AI Reaction" (2024-10-26T15:4…)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyc3FQkqAg96PNsal94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgynQGBmTQIcAAAttWx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugyqpl7NK0l-SNmzHz54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzyilARXV6CsoR8HVd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxPYPpVAvqLIe-o75d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxWjAgReIPGrt-bTqN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy7eQBWcb3LDX005L94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwNLHhVJeIDCkaqWxB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxkwLnuRcx-94QSlpJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4_F9CMdbcbK5_BZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
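For reference, a raw response like the one above can be matched back to a single coded comment with a few lines of Python. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come straight from the JSON; the `lookup_coding` helper is illustrative, a sketch of the lookup rather than the dashboard's actual implementation:

```python
import json

# A model response in the same shape as above, truncated to two
# entries for brevity (the IDs and values are copied from the sample).
raw_response = """
[
  {"id":"ytc_UgzyilARXV6CsoR8HVd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx4_F9CMdbcbK5_BZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON array and return the coding row for one comment ID."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            return row
    return None

coding = lookup_coding(raw_response, "ytc_Ugx4_F9CMdbcbK5_BZZ4AaABAg")
print(coding["policy"], coding["emotion"])  # unclear resignation
```

The second entry here is the row that produced the Coding Result table above (distributed / consequentialist / unclear / resignation).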