Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not going to work, unless you get tens of thousands of other people doing the same thing. These systems are trained on trillions of tokens. If an AI sees 1 billion images during training, and 1000 of them are 'poisoned', that's not going to make a difference alone.

There's a theory that AI systems will poison themselves though. A lot of content on the web is AI generated now. And each iteration of training sucks up the previous outputs indirectly. You can see this with text models. Get it to write a story with a fairly generic prompt set in a city, you'll end up with "in the bustling city..." "skyscrapers pierce the sky" "a testament to ..." etc. I'm seeing this sort of thing appear in Hollywood movies as well.

If you really want to poison the datasets, one idea might be to generate like 5 AI images for every 1 real image, and publish them all together each time. You'd need to make sure they're not watermarked though, as a lot of them are now to avoid future training.
Source: YouTube · Viral AI Reaction · 2024-11-06T00:1…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwEDTOFzN4tsgymC3p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFE-4prNcJ7Wqtp3p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyPDyVCLeWM6TTx4Kp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3zSlb-5cuqVUYKFR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzXI7MQOVg-s-vdMmh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx7MvfK5cxAxoSCyx94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwF3MrpGSc16tBWnGZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyQyugD4H_PeRTeJPJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJYnS4cZn0JhL6Qgd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzehx8UFqdoWowvV1R4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
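A batch response like the one above can be parsed into per-comment coding records with the standard `json` module. The sketch below is a minimal, hypothetical illustration (not the tool's actual code): it validates each record against the label values observed in this batch only (the real codebook may be larger), then looks up the coding for the comment shown above by its `id`. The raw string is truncated to three of the ten entries for brevity.

```python
import json

# Raw LLM response text, truncated to three of the ten entries shown above.
raw = (
    '[{"id":"ytc_UgwEDTOFzN4tsgymC3p4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgzXI7MQOVg-s-vdMmh4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    '{"id":"ytc_UgwF3MrpGSc16tBWnGZ4AaABAg","responsibility":"government",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)

# Label sets observed in this batch; an assumption, not the full codebook.
DIMENSIONS = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "mixed"},
}

records = json.loads(raw)
for rec in records:
    for dim, allowed in DIMENSIONS.items():
        assert rec[dim] in allowed, f"{rec['id']}: unexpected {dim} {rec[dim]!r}"

# Index by comment id and look up the comment coded above.
by_id = {rec["id"]: rec for rec in records}
coding = by_id["ytc_UgzXI7MQOVg-s-vdMmh4AaABAg"]
print(coding["emotion"])  # resignation
```

Validating against a fixed label set before accepting a batch catches the common failure mode where the model invents an off-codebook label mid-response.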