Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "I thought it was some type of filter creators had started using. It makes sense…" (ytc_UgznOXuG7…)
- "@ 2:01, I'd say you are throwing around words that you don't know and being used…" (ytc_Ugzr2hSiF…)
- "I have had conversations in which the ai expresses emotions just as we do. Just …" (ytc_UgzBbv-tR…)
- "you oversimply the creative process because you are an ai booster with no real s…" (ytr_UgxWjpVx_…)
- "What about repetitive tasks in art? Like making generic textures and denoising. …" (ytc_UgxqXTXL4…)
- "AI has already learned how to lie and deceit. I believe AI tried scamming an 80 …" (ytc_UgxvcbtzQ…)
- "This is not an issue just with art. You ask AI for anything super-specific and f…" (ytr_Ugzoq3uuB…)
- "This is so scary. I'm in my 40s and apart of the go to college and learn compute…" (ytc_UgwuRm5qN…)
Comment
> To flip it on it's head, AI is trained to "code" by scanning projects at github.
> AI "code" is therefore "poisoned" by the fact that github projects are, using the "artist" metaphor, spanning chimps throwing faeces at walls up to Rembrandt.
> But, with the code skewed towards the chimps by weight of numbers.
> It's otherwise harder to "poison" art, since I thought Marcel Duchamp tried to "poison" art to make a point about perceptions of art. Piccasso also apparently made "ugly" art, at least least objecting to "convention" - in the process of creating a "convention".
> So, it's probably less of a question around whether or not you can "poison" AI scraping, and more about what really is or is not art.
> If art is about perception, then what? No one is right. Everyone is right.
> In the end, why the general public should see your art "poisoning" any differently to AI "hallucinations" is an interesting question.
Platform: youtube · Video: Viral AI Reaction · Posted: 2025-04-25T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwDJ5RH1I0sIWKAwWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwHNZxP8JkW7u9cIgJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx8MnDb0cSZQVg75Qd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw0meCiziEYydysP5F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGuQYqy4Zcy1ftMbR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzDjHFY79KjbV9nF_V4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugych_LZHduMkoQIN214AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNdS3LJhVjlsqONI14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz4PHd03DQZ6KkF1FF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugw7rpvKHMk5tJIJXHd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```