Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "From dock to dock delivery they'd have automated forklifts with barcode readers …" (`ytc_UgiDyL4J4…`)
- "Employers are trying to make their employees train the AI so everyone can be fir…" (`ytc_Ugy7ktRpW…`)
- "Hi yeah regarding my complaint earlier like I've been complaining ever since the…" (`ytc_UgzHnqXgr…`)
- "When it comes down to it if making money was the only reason to become an artist…" (`ytc_UgzRllBjf…`)
- "Wouldn't consciousness technically be AI writting it's own code? If text AI can…" (`ytc_UgzF9jIjm…`)
- "Still no robot carpet layers, robot plasterers, robot plumbers, robot roofers, r…" (`ytc_UgxxrIGR9…`)
- "AI is evolving beyond automation. We are entering the era of B.i.o-N.a.n.o N.e.t…" (`ytc_UgxNfc7tS…`)
- "Technically, chatgpt defined a lie as being intentionally false or misleading. O…" (`ytc_UgxzEb2MC…`)
Comment
It is unfortunately a cat-and-mouse game. Every time you come up with a solution that prevents image scraping and training, an AI engineer is working on ways to overcome just that.
Poisoning works because of how it manipulates what the AI sees when it tries to learn from the image.
Edge detection fails when it sees a garbled mess, hence the poisoning works.
BUT... what if the AI does something that normalises the image before processing it? You can't see the poisoning, meaning a human isn't sensitive enough to see the artifacts. What if the bot "fixes" the image so it doesn't have that artifacting before being fed into the model?
But I agree with the sentiment. If it's used without permission, then it is stolen work.
And don't even get me started on using the generated images to harass people. That's just evil.
I think a paradigm shift is needed and poisoning isn't really the answer. It works... for now... so we use that while the next solution is cooking.
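The normalisation counter-move the commenter worries about can be sketched with a toy experiment. Everything here is hypothetical: random sign noise stands in for a real poisoning perturbation, and a box blur stands in for whatever preprocessing a scraping pipeline might run before training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean" image plus a small high-frequency perturbation standing in
# for a poisoning artifact (illustrative only, not a real poisoning scheme).
clean = rng.uniform(0, 1, size=(64, 64))
poisoned = np.clip(clean + 0.05 * np.sign(rng.standard_normal((64, 64))), 0, 1)

def box_blur(img, k=3):
    """A k x k box blur: one crude stand-in for a normalisation pass."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Averaging cancels the roughly zero-mean noise faster than it changes the
# underlying image, so the normalised input sits closer to the clean one.
err_poisoned = np.abs(poisoned - clean).mean()
err_normalised = np.abs(box_blur(poisoned) - box_blur(clean)).mean()
print(err_poisoned > err_normalised)  # True: the blur shrinks the perturbation
```

This is exactly why simple perturbation-based defences are fragile: any low-pass step in the ingestion pipeline erodes them, which supports the comment's "works... for now" assessment.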
Source: youtube · "Viral AI Reaction" · 2024-10-23T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyW9vFRi93R8p4YYaF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0DIiuMV6CDgwC3xt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHzybdo8CBkQJp-EF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzxByI1M0ovhgcpjYB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzGPEsJJl9QbrNArp14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_4rbncV3qMCh8wKV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxBw3yh5lBxrkesYvV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxMYBrZ3ehiZZE02nd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxOtUGOTiL83en-mOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwOqvSQyElZasbXbk54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
```
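The Coding Result table above corresponds to one record in this array (id `ytc_UgxBw3yh5lBxrkesYvV4AaABAg`). A minimal sketch of how such a response could be parsed and schema-checked; the `EXPECTED_KEYS` set is an assumption read off the records shown, not a documented schema:

```python
import json

# One record copied from the raw response above; a real run would load
# the full array returned by the model.
raw = """[
  {"id": "ytc_UgxBw3yh5lBxrkesYvV4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "resignation"}
]"""

# Assumed schema: the five keys every record above carries.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
assert all(set(r) == EXPECTED_KEYS for r in records)  # reject malformed output

# Index by comment id, the way an id lookup would resolve a comment
# to the coded dimensions shown in the table.
by_id = {r["id"]: r for r in records}
coding = by_id["ytc_UgxBw3yh5lBxrkesYvV4AaABAg"]
print(coding["policy"], coding["emotion"])  # industry_self resignation
```

Validating the key set up front catches the common failure mode where the model drops or renames a dimension in a batch of codings.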