Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Elon pretends he's part of an industry-wide effort toward self-driving technolog…" (ytr_UgxRKBKrS…)
- "Genshin has been in a really sad state for years now... and it decided to make i…" (ytc_Ugzg7ZtEo…)
- "I remember once seeing someone on discord being mad that someone posted a video …" (ytc_UgzsaeeNN…)
- "lol 😂no wonder no has jobs. Too bad the Politicians we have won’t retire because…" (ytc_Ugw-Og6yH…)
- "It isn't even that bad. They're just fixing up a video, it's the EXACT same if t…" (ytc_Ugz5m6yFZ…)
- "Seems like Diary of a CEO keeps bringing on the doomsday AI scientists who only …" (ytc_UgwZEMQeX…)
- "Lol the idea that AI will EVER be \"conscious\" is so cooked.. The only thing that…" (ytc_UgzVjZS1N…)
- "I saw a video about ChatGPT and the Turing test. And one of the questions was if…" (ytc_UgxR3q8TZ…)
Comment
I tested this out of curiosity, not malice. Using only two of the supposedly “poisoned” images, I was still able to get an AI model to reproduce her art style. This isn’t because the protections are fake; it’s because of how and when they work. Nightshade-type “poisoning” only affects the training or fine-tuning of a model that actually uses those poisoned files. It does nothing to models that were already trained on clean copies, or that pull unpoisoned duplicates from elsewhere online. And when you generate with a prompt or a couple of references, the model isn’t learning in that moment; it’s just using patterns it already contains. Modern models can latch onto a style from very few examples, which is why two images were enough.
There’s also a backfire risk. If poisoning alters images in consistent ways, a training pipeline can learn to ignore that noise and become more robust. Some poisons add distinctive color shifts or artifacts that act like extra signals, which can actually help a model generalize. Researchers study these adversarial tricks, fix the weaknesses they reveal, and the next generation of models gets stronger. So while poisoning might disrupt future training runs that include those exact files, it doesn’t block existing models, it doesn’t erase what’s already baked in, and in some cases it can even help models become more resilient.
Bottom line: I could match the style with two poisoned images because the protection doesn’t affect generation, doesn’t touch already-trained models, and can sometimes provide more signal rather than less.
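The comment's core technical claim, that generation only reads a model's weights while training is the only step that writes them, can be sketched with a toy two-parameter "model" (pure Python, entirely illustrative; not a real diffusion model):

```python
# Toy illustration: generation = forward pass, which only READS the weights;
# training is the separate step that WRITES them. Poisoned inputs seen only
# at generation time therefore cannot alter what the model has learned.

weights = [0.5, -1.2]  # frozen parameters of an already-trained "model"

def generate(x):
    """Forward pass: uses the weights but never modifies them."""
    return weights[0] * x + weights[1]

def train_step(x, target, lr=0.1):
    """Training: the only place the weights change (toy gradient step
    for squared error on a linear model)."""
    err = generate(x) - target
    weights[0] -= lr * err * x   # d(err^2)/d(weights[0]) direction
    weights[1] -= lr * err       # d(err^2)/d(weights[1]) direction

before = list(weights)
for x in [1.0, 2.0, 3.0]:        # "prompting" with any inputs, poisoned or not...
    generate(x)
assert weights == before          # ...leaves the weights untouched

train_step(1.0, 0.0)              # only an actual training update changes them
assert weights != before
```

The same separation holds in real frameworks: at inference time no gradients are computed and no optimizer step runs, so adversarial perturbations in the reference images have nothing to poison.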
Source: youtube — Viral AI Reaction — 2025-09-10T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxa2aMuNDkpB02VmSh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwKpEaEM6yRrKWHy8F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSTSqwHheaD6lxyS14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx7uTrMq9b7EStYlhp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxIJiWygubq5AeLB4d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzAuh0wvcfXehuWYtJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyHR8SXrA_Maj_A9xN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy0CBEoE_MClQ7kDD54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugxi0dF1no-jwlxUMEt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwiJPL6zQATTl6dW5h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
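The raw response is a JSON array of per-comment codes in the five dimensions shown in the table above. A minimal sketch of how such output might be indexed and tallied (field names taken from the response; the shortened IDs and the two sample rows are illustrative):

```python
import json
from collections import Counter

# Two illustrative rows in the same schema as the raw LLM response above
# (IDs shortened for readability; values copied from the first two entries).
raw = """[
  {"id": "ytc_Ugxa2a", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwKpE", "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]"""

codes = json.loads(raw)

# Index by comment ID to support looking up a single coded comment.
by_id = {row["id"]: row for row in codes}
assert by_id["ytc_Ugxa2a"]["emotion"] == "outrage"

# Tally one dimension across all coded comments.
emotions = Counter(row["emotion"] for row in codes)
assert emotions["outrage"] == 1 and emotions["approval"] == 1
```

A real pipeline would also want to validate that each row carries exactly these five keys and that the values fall in the coding scheme's allowed categories before aggregating.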