Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Watch the movie Colossus: The Forbin Project. I wonder based on humans today if …" — ytc_UgxylMc_l…
- "AI chat customer service (“virtual assistants”) and automated phone menus are bo…" — ytc_Ugydtq6Y9…
- "I feel like AI is a tool but the way its being used right now for just bad purpo…" — ytc_UgwPV6HrG…
- "Let's also talk about power consumption for AI and how the demand increase well …" — ytc_UgyJjKbut…
- "The first one if you see closely on the back there is a guy walking on the backg…" — ytc_Ugybr-2Ez…
- "Its funny because if all humans were actually honest we would have no need for t…" — ytc_UgxONrmAm…
- "Please explain the massive data systems that run AI. I do not understand how man…" — ytc_Ugy05cByp…
- "Ever since google added its AI output into the search engine results, I find it …" — ytc_Ugy9Loqq9…
Comment
Expanding on the efficacy of Nightshade and related techniques:
It’s worth noting that there’s a bit of nuance here. The problem with peer reviewing something like this is that in machine learning there’s an effectively infinite number of approaches, so they have to test a few major approaches and then call it at some point; if you’re a researcher you can’t literally test every single possibility. It’s just not practical.
Open source development and exploration can be thought of as a sort of search function (a bit like an ant colony). Have you ever noticed how your house has walls and looks impossible to get into, but ants somehow find their way in if you leave watermelon out? Open source development is kind of like that. There’s not really any stopping the most dedicated people. With that said, you can absolutely make it more difficult than it’s worth for the least skilled and least dedicated members of the community, and to be fair, I’d argue that’s not a bad outcome: there’s an argument that the extremely dedicated members (who generally have workflows they spend significant effort on, going far beyond just prompting) are probably closer to collage artists than to AI prompt engineers anyway.
With regard to model collapse:
It’s a similar story here. A major issue with a lot of studies on model collapse is that they follow a pretty straightforward strategy: they take outputs from one model and train another on them. In this naive scenario, yes, you cannot keep a model going without new data from humans. The problem is that we have so many more tools to work with now that it’s just not a practical limitation for anyone with good practices. These range from entropy-based sampling (i.e., using various agentic strategies to diversify the output distribution so it looks more “natural”), to using classical algorithms to produce conditioning (e.g., canny edges, depth maps, etc.), to putting together fully indirect pipelines (such as an LLM generating images with non-AI tools); and failing all of that, even plain reinforcement learning appears to be enough to keep models improving ad nauseam. If public data becomes inaccessible…I’m afraid the flywheel has already started, and people have gotten too used to having AI models. There will always be a way to continue progressing them.
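The naive retrain-on-your-own-outputs loop described above can be illustrated with a toy Gaussian experiment. This is a hypothetical sketch, not code from any cited study: `collapse_demo` and its parameters are invented for illustration, and it only assumes NumPy.

```python
# Toy illustration of naive model collapse: each "generation" fits a
# Gaussian only to samples drawn from the previous generation's fit.
# With no fresh real data, estimation error compounds and the learned
# distribution's spread degenerates toward zero.
import numpy as np

def collapse_demo(n_samples: int = 50, n_generations: int = 500, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0                      # the "real" data distribution
    data = rng.normal(mu, sigma, n_samples)   # the only human data we ever see
    for _ in range(n_generations):
        # Fit the next "model" solely on the previous model's outputs.
        mu, sigma = data.mean(), data.std()   # MLE; finite-sample error compounds
        data = rng.normal(mu, sigma, n_samples)
    return float(sigma)

# After many self-trained generations the fitted spread collapses
# far below the true sigma of 1.0.
final_sigma = collapse_demo()
```

The point of the sketch is only the naive scenario the studies test; the diversification and conditioning strategies mentioned above are precisely ways of breaking this feedback loop.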
That doesn’t necessarily mean these tools are completely pointless: to an artist there may be a big personal difference between “Oh, AI is getting generally better” and “AI is using my art to get better”, so by all means, if it makes you feel better, go nuts. You’re well within your rights, and it’s a totally understandable course of action.
youtube
Viral AI Reaction
2025-03-31T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzz5MovGShyQTp6w6B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxUozFeTSLm2Yi0p9t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8ojk6BLVFn4cyRYd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwvdq1A0w2z80AIydJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzRv1IW4WhxY-PoiVB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwY_fAnZo9UpYX1gwx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_RcmsPaWES0CdfYZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwFxRsLWyG6PkG8dAN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwrYVRwZyx4eUI_Jk54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugynd0Wlhfkfcpdf_fV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```