Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
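The ID lookup described above can be sketched as a simple in-memory index. This is a minimal illustration, not the tool's actual implementation; the record shape mirrors the raw LLM response shown further down this page, and the IDs below are made up.

```python
# Sketch of a comment-ID lookup over coded records, assuming records are
# dicts shaped like the raw LLM response on this page. IDs are illustrative.
coded = [
    {"id": "ytc_example1", "responsibility": "none",
     "reasoning": "consequentialist", "policy": "none",
     "emotion": "resignation"},
    {"id": "ytc_example2", "responsibility": "company",
     "reasoning": "deontological", "policy": "regulate",
     "emotion": "outrage"},
]

# Index once by ID so each lookup is a single dict access.
by_id = {record["id"]: record for record in coded}

def lookup(comment_id: str):
    """Return the coded record for a comment ID, or None if it was never coded."""
    return by_id.get(comment_id)
```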
Random samples — click to inspect:

- "Why can't these big companies in need of massive amounts of fresh water, distill…" (ytc_UgzxICmIm…)
- "@JohnSmith-x3y8h😂😂 why do you keep repeating this like you know anything about w…" (ytr_Ugynkrx2c…)
- "Ngl f*ck A.I. Artiest since they take almost any money away from Artiest. Like w…" (ytc_UgxJ5LEj4…)
- "Why are they making the robots sexualized? This is the next step in eliminating…" (ytc_UgzZdNxKa…)
- "With the birth of General Intelligence Artificial Intelligence which puts the AI…" (ytc_UgwGvi9Qg…)
- "BE AFRAID! BE VERY AFRAID! ALL RUN AROUND SCARED! YOU HAVE TO BE SCARED! CATATRO…" (ytc_UgwJ56HfN…)
- "First he created something to the detriment of humanity, and then he warns against it? Well…" (Polish, translated) (ytc_UgyhGz4sR…)
- "this is a warning / first they will create the infrastructure for surveillance / t…" (ytc_UgwYvihMN…)
Comment
It's not going to work, unless you get tens of thousands of other people doing the same thing. These systems are trained on trillions of tokens. If an AI sees 1 billion images during training, and 1000 of them are 'poisoned', that's not going to make a difference alone.
There's a theory that AI systems will poison themselves though. A lot of content on the web is AI generated now. And each iteration of training sucks up the previous outputs indirectly.
You can see this with text models. Get it to write a story with a fairly generic prompt set in a city, you'll end up with "in the bustling city..." "skyscrapers pierce the sky" "a testament to ..." etc. I'm seeing this sort of thing appear in Hollywood movies as well.
If you really want to poison the datasets, one idea might be to generate like 5 AI images for every 1 real image, and publish them all together each time. You'd need to make sure they're not watermarked though, as a lot of them are now to avoid future training.
Platform: youtube · Video: Viral AI Reaction · Posted: 2024-11-06T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwEDTOFzN4tsgymC3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFE-4prNcJ7Wqtp3p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyPDyVCLeWM6TTx4Kp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw3zSlb-5cuqVUYKFR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzXI7MQOVg-s-vdMmh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx7MvfK5cxAxoSCyx94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwF3MrpGSc16tBWnGZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyQyugD4H_PeRTeJPJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJYnS4cZn0JhL6Qgd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzehx8UFqdoWowvV1R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
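A raw response like the one above can be parsed and sanity-checked before its rows are accepted into the coding table. The sketch below is a hypothetical validation pass, not the tool's actual pipeline; the allowed-value sets are inferred only from the values visible on this page and may be incomplete.

```python
import json

# Allowed values per coding dimension, inferred from this page's examples.
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "mixed"},
}

def parse_raw_response(text: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array) and keep only valid records."""
    records = json.loads(text)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Illustrative input: the second record uses an out-of-schema value
# ("aliens") and is dropped by the validation pass.
raw = '''[
  {"id": "ytc_abc", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "resignation"},
  {"id": "ytc_bad", "responsibility": "aliens",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

records = parse_raw_response(raw)
```

Validating before ingestion matters here because the model occasionally emits labels outside the codebook; silently storing them would corrupt downstream tallies.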