Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Dude's really trying to convince an ai that was programmed to say sorry when som…
ytc_Ugy9aTom0…
Would this then suggest as to why we have been seeing such (arguably) horrible s…
ytc_UgxDF2Qih…
Lol snowflakes calling AI art unethical but they’re all for automated driving ve…
ytc_Ugy5fS8Mo…
Yeah but wht about the 99.9999% of artist who DONT HAVE A PUBLIC FACE or YOUTUBE…
ytc_UgxwKRez5…
@be7256 interesting... and helps prove my point. By that logic we could give two…
ytr_Ugz4Y5NBd…
God Father Of AI 😂😂😂 China Is Decades Ahead Of All Western European Countries Yo…
ytc_UgzOiKAgm…
So what if we’re against self driving cars and against AI?
What happens if it g…
rdc_i2s7xa2
Ai is an art thief :( some people will find ai “art” supposedly in their art sty…
ytc_Ugyqyz1A9…
Comment
I'm an artist and pursuing masters in CS. Just to be clear, I'm on your side, and I stand by most of your points, but I think it's also important to be clear about what data poisoning is: it's a retardant, not a medicine. Training on poisoned dataset is already a research area with concrete results. Already in my university I've seen a paper attempting to do just that (I can't name it lest I dox myself). It's not tested on Nightshade (and I dont have enough knowledge to tell whether it will be effective against it), but the premise is the same: training on poisoned data that looks normal to humans but appears as different objects to a diffusion model. It doesn't even need a clean sample, it claims it's effective even with all poisoned samples.
The only surefire way to stop these products from catching on is to make it illegal; AI companies can (and do) stash old data from the web and can simply retrain their models once Nightshade (or older versions of it) becomes ineffective.
Platform: youtube
Video: Viral AI Reaction
Posted: 2025-04-03T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyARWOxFpIt07CqoaR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeFeGckk4rbklsJ3d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7AuQ5092o_YHqanR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9DFZ1-fOEAuiHZAt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwQMEP_-9fN01eGc6p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzgJb0ez6qIuW0PCR94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyeDGwPE4STa83IwZF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzs9QXLxh3IgPT5rTZ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxpw5dWpWWhvcYRmfR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgysFIaBzE795GATKzR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
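The raw response is a JSON array with one object per coded comment, carrying the same four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of indexing such a response by comment ID, as the "look up by comment ID" view would need to do; the helper name `codes_by_id` is hypothetical, and the two-entry sample is abridged from the array above:

```python
import json

# Abridged raw model output: a JSON array of per-comment coding objects.
raw = """[
  {"id": "ytc_UgyARWOxFpIt07CqoaR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw9DFZ1-fOEAuiHZAt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

def codes_by_id(response_text: str) -> dict:
    """Parse the model's JSON response and index the codes by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

index = codes_by_id(raw)
print(index["ytc_Ugw9DFZ1-fOEAuiHZAt4AaABAg"]["emotion"])  # -> outrage
```

In practice the model output would also be validated against the allowed code values for each dimension before display, since LLM responses can drift from the requested schema.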