Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don't know if this is going to have the effect you're going for - these 'poisoned' works of art, with tags that accurately reflect what a normal human would see in them, are the *most valuable* kind of training data for an AI, because they teach the AI what kinds of artifacts are irrelevant to a human viewer. A better approach, I think, would be to post normal art with completely incorrect descriptions, like if you had tagged that hand picture with the description "a beautiful fantasy landscape by Greg Rutkowski", or to post art with correct tags that also have the kinds of weird artifacting we see in AI art, bad anatomy, discontinuous lines, etc. At the scale these companies are scraping the internet, they can't possibly catch mistakes like this that seem fine if you only look at the art or the description individually, and AI doesn't know what things mean, it only makes connections between words and patterns of pixels, so muddying that connection is the best way to break the AI. Good luck!
Source: youtube · Video: "Viral AI Reaction" · Posted: 2024-10-20T20:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxQBtqqAznL1BIpjL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlBJpvwdpEIei17ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzIlMgjZPi0v6q9HTB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzKBL0HZ7CRGxewgKl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwPVGHhdpkDFLMZ0RF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7OxeTAc-miY8Fuxt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxnZ86ECZXLl0ThoCR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy8ID7Oi5UduajAH5t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugzn60t9hF3UgOVVcnx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxIH6ysLpbWM0opQqB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
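A raw response like the one above is only usable if every row stays inside the coding scheme. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those observed in the sample output (the `ALLOWED` sets are an inference, not a documented schema), and that each row carries the five keys shown.

```python
import json

# Assumed value sets per coding dimension, inferred from the sample
# response above — adjust to the actual codebook.
ALLOWED = {
    "responsibility": {"user", "company", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only in-scheme rows.

    Rows missing a dimension or using an out-of-scheme value are dropped,
    so downstream analysis never sees values outside ALLOWED.
    """
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Usage with a (hypothetical) one-row response:
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
print(len(validate_codes(raw)))  # 1
```

Silently dropping invalid rows is one design choice; logging each rejected `id` for manual recoding is often the better one in a coding pipeline, since dropped rows otherwise look identical to uncoded comments.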