Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I tested this out of curiosity, not malice. Using only two of the supposedly “poisoned” images, I was still able to get an AI model to reproduce her art style. This isn’t because the protections are fake; it’s because of how and when they work. Nightshade-type “poisoning” only affects the training or fine-tuning of a model that actually uses those poisoned files. It does nothing to models that were already trained on clean copies, or that pull unpoisoned duplicates from elsewhere online. And when you generate with a prompt or a couple of references, the model isn’t learning in that moment; it’s just using patterns it already contains. Modern models can latch onto a style from very few examples, which is why two images were enough.

There’s also a backfire risk. If poisoning alters images in consistent ways, a training pipeline can learn to ignore that noise and become more robust. Some poisons add distinctive color shifts or artifacts that act like extra signals, which can actually help a model generalize. Researchers study these adversarial tricks, fix the weaknesses they reveal, and the next generation of models gets stronger.

So while poisoning might disrupt future training runs that include those exact files, it doesn’t block existing models, it doesn’t erase what’s already baked in, and in some cases it can even help models become more resilient. Bottom line: I could match the style with two poisoned images because the protection doesn’t affect generation, doesn’t touch already-trained models, and can sometimes provide more signal rather than less.
youtube Viral AI Reaction 2025-09-10T06:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxa2aMuNDkpB02VmSh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwKpEaEM6yRrKWHy8F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSTSqwHheaD6lxyS14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx7uTrMq9b7EStYlhp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxIJiWygubq5AeLB4d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzAuh0wvcfXehuWYtJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyHR8SXrA_Maj_A9xN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy0CBEoE_MClQ7kDD54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugxi0dF1no-jwlxUMEt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwiJPL6zQATTl6dW5h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
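A raw response like the one above can be turned into per-comment coding records with a few lines of Python. This is a minimal sketch, assuming the response is valid JSON; the `parse_codings` helper and the fallback value `"none"` are illustrative assumptions, not part of any coding pipeline described here, and the sample `raw` string reuses one record from the response above.

```python
import json

# The four coding dimensions present in each record of the raw response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Sample input: one record copied from the raw LLM response above.
raw = ('[{"id":"ytc_Ugxa2aMuNDkpB02VmSh4AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')

def parse_codings(raw_text):
    """Map each comment id to its dimension/value coding.

    Hypothetical helper: missing dimensions fall back to "none",
    mirroring the default seen in the coded records.
    """
    records = json.loads(raw_text)
    return {
        rec["id"]: {dim: rec.get(dim, "none") for dim in DIMENSIONS}
        for rec in records
    }

codings = parse_codings(raw)
print(codings["ytc_Ugxa2aMuNDkpB02VmSh4AaABAg"]["emotion"])  # outrage
```

Keying the result by comment id makes it straightforward to join a coding back to the comment it describes, as the report above does for `ytc_UgxIJiWygubq5AeLB4d4AaABAg`.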