Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This really isn’t going to work long term. The whole reason AI datasets are so large is to fix any weird holes anyway. A lot of AI training data is already poisoned by SEO keywords and algorithmic BS.
Looking at the Nightshade article, they had to train a Lora on 300 poisoned images to really make a massive difference - but that’s a dataset of ONLY poisoned images. If there’s 10,000 poisoned images in a dataset of 50 million, it’s a statistical anomaly.
Artists would better spend their time working with companies like Invoke AI who want to help safeguard artist rights, and find ways to create opt-out forms and such to have their data excluded. Some AI companies already allow artists to opt out - Stable Labs removed Greg whatshisname from their dataset at his request.
Yes, you’ll never be able to prevent unscrupulous people from training Lora sets or something. But software companies have likewise had to learn that DRM never fully prevents unscrupulous people from pirating software either.
youtube
Viral AI Reaction
2024-11-04T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyMd8RbSEnUXJhChFh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxbEBMYtTR1dnzrJJd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgxLwMI2gLsQyF5FqjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkAOtFoAVOVNvzLd14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugz4_mwpA8tmAylj-Ax4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx3NmgZO6I6xwAfJi14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyu-l4SGUvXOyQjZIN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgziofpHglQxBhnrOR14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw28z27FcR-uTmP2x94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxGcaMW7bB-1bDbhSx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
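The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and validated before loading it into the coding table; the dimension names and example values come from the output above, but the complete allowed-value sets are an assumption for illustration:

```python
import json

# Values observed in the raw response above; the full allowed sets
# used by the actual coding scheme are an assumption here.
ALLOWED = {
    "responsibility": {"government", "user", "ai_itself", "company", "none", "distributed"},
    "reasoning": {"consequentialist", "virtue", "deontological", "contractualist", "mixed"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "disapproval", "fear", "mixed", "resignation", "approval", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    valid = []
    for rec in json.loads(raw):
        # Comment IDs in the response carry a ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Drop records with an unrecognized value in any dimension.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_UgyMd8RbSEnUXJhChFh4AaABAg","responsibility":"government",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]'
print(len(parse_codes(raw)))  # 1
```

Filtering rather than raising keeps a batch usable when the model emits one malformed record, which matters when a single response codes ten comments at once as above.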