Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The trick here is even freelancers, who could really benefit from having certain…
ytr_Ugy5woM8i…
Ai has no emotion. Emotion regulates intelligence. AI will forever be unstable…
ytc_UgxfAceeg…
"AI art bros" are much less annoying than twitter artists crying an complaining …
ytc_UgythtogM…
The English language has existed for a while. Neither auto-correct or LLMs can g…
ytc_UgyDrnOHQ…
Leo is a hypocrite who still uses private jets for travel.
Can't preach about ch…
rdc_esqr2v8
Humans being able to focus on creative intellectusl jobs? Dude, you even paying …
rdc_jffd2qz
I DON'T SOUND LIKE THAT OR TRAIN ON NONCONSENTING DATA :C not all AI people are …
ytc_UgxUvI1TQ…
Good! Now can we get a ban for people using Ai to replace jobs?! We need that to…
ytc_UgwahyktE…
Comment
Hi, I actually make fine-tunes for an SDXL-based model called Illustrious by OnomaAI, and would like to clear something up:
There is a common misconception that if I have to resize and process the image to crack one of these dataset poisoners, that somehow affects the quality of my dataset. This is not true. Why? Because the training process works in 64- or 32-pixel increments. Unless your image is (1) at a resolution divisible by 32 (or sometimes even 64) and (2) at a resolution that is a variant of 1024x1024 or 1536x1536, I'm going to have to do that processing anyway. The silliest part is that tagging is the only step I even have to think about, since I can set up a processing pipeline to convert to a more easily transformed file type, shrink to just below the target resolution, and then upscale with something like Lanczos (Lanczos is a bit old now, but it serves as the example). These methods, including the UChicago paper, were very quickly cracked by people like Lyumin Zhang in several ways, one of the simplest being saving a screenshot as a JPEG. While I personally haven't ever knowingly interacted with anything using these filters, I know many people who have, and that's also the problem: it's so ingrained in the pipeline that I would never know if I had interacted with one, since I wouldn't see a difference. Maybe a general web scraper for Google or OpenAI would be hit by this, but if so, I don't know why they pay their engineers at all.
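The resolution constraint described above can be made concrete. Below is a minimal, hypothetical bucket-fitting helper (`fit_to_bucket` is an illustration, not OnomaAI's actual pipeline) showing why nearly every source image gets resampled before training regardless of any poisoning filter: both sides get snapped to 64-pixel multiples near a 1024x1024 target area.

```python
# Hedged sketch of SDXL-style resolution bucketing. Training operates in
# 64-pixel increments around a ~1024x1024 target area, so arbitrary source
# resolutions are resized anyway -- the resampling step that defeats
# pixel-level poisoning is already part of normal preprocessing.

def fit_to_bucket(width, height, target_area=1024 * 1024, step=64):
    """Scale (width, height) to roughly target_area, then snap both
    sides down to the nearest multiple of `step`."""
    scale = (target_area / (width * height)) ** 0.5
    w = max(step, int(width * scale) // step * step)
    h = max(step, int(height * scale) // step * step)
    return w, h

# A 4000x3000 photo does not survive intact: it lands in a 1152x832 bucket.
print(fit_to_bucket(4000, 3000))   # both sides divisible by 64
print(fit_to_bucket(1024, 1024))   # already on-bucket, unchanged
```

Only an image already sized to such a bucket would skip resampling, which matches the commenter's point that the "processing" step is unavoidable, not an extra cost imposed by the poisoner.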
When I add something to a dataset, I am not actually trying to train on the pixel details of a 4K image. At best I am training on the details of a 1K image, and even then we have upscalers that correspond to line quality for reconstruction. The goal of adding a human-made image to a dataset is usually to imbue some property of the composition, style, or anatomy. We're actually already past the point where average human-made pixel detail is inferior to the best AI outputs, which is why things like cyborg datasets exist (a fully synthetic set is basically a mythical unicorn that nobody really believes in currently, but a cyborg set is somewhere between 5 and 25% synthetic) for when the intended style isn't something you can find professional-grade sources for.
You may ask: well, what am I supposed to do? Just fucking slap a watermark across the character. You can still recreate it, kind of, but the reconstruction there is far more destructive and, more importantly, time consuming. This stupid 'filter' shit is astrology for people afraid of ML. If you really want, I can make a set, put that entire set through this filter, and show with an automatic pipeline that even a full set of this demon-salting has absolutely minimal impact. We have had the technology to actually stop your images from being used for ML for a long time: just put a watermark over it. As much as it'd limit my ability to construct a set if I ever did want to do a Twitter-bound art-style model for some reason, I'd rather my time be more difficult than have my intelligence insulted by this collective delusion.
I hope this was informative and helpful.
youtube
Viral AI Reaction
2026-01-09T11:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgzSS3iqW-45cH8Hxpx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxvAbhOZQdrc-3jDpd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},{"id":"ytc_Ugxjg6vug0DaF-SM4jN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},{"id":"ytc_UgzuoVBCIh99FhoZ15x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgzWuchtnT8P21CV7FJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgzhIBT5QbZ_TS56IXl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},{"id":"ytc_Ugz3TEHuw6c_q95kJ_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},{"id":"ytc_UgwDKaZBK-b4N5nMkh54AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},{"id":"ytc_UgwjrZxPTwyPs4f2x5t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},{"id":"ytc_UgzP5jJDG6WJ0dbYc0R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}]
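Responses like the one above are easy to sanity-check before the codes are stored: parse the array and confirm each record carries the five coded dimensions. A minimal sketch, assuming the record schema shown above (the two records in the string below are excerpted from the response):

```python
import json
from collections import Counter

# Hedged sketch: validate a raw coding response before ingesting it.
# A stray ")" in place of the closing "]" would raise JSONDecodeError
# here instead of silently corrupting the coded dataset.
raw = ('[{"id":"ytc_UgzSS3iqW-45cH8Hxpx4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
       '{"id":"ytc_UgwDKaZBK-b4N5nMkh54AaABAg","responsibility":"government",'
       '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')

codes = json.loads(raw)

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(REQUIRED <= c.keys() for c in codes), "record missing a dimension"

# A quick tally of one dimension across the batch.
emotions = Counter(c["emotion"] for c in codes)
print(len(codes), dict(emotions))
```

Checking the schema per record (rather than trusting the model) also catches partial answers, such as a record that drops the `emotion` field.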