Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These are the dumbest of the "AI bros", these clowns do not represent us in the open source community.

1: If you want to cause real problems, nightshade isnt how you get there. It works fine against finetunes of SDXL, but if you change model, it cant work because the way the model thinks is wildly different. Have someone make a script that spams out unlimited totally garbage quality images from an AI model of your choice (Put steps real low), stretch em a bit, low quality jpg them, just generally make it as awful as you can, and put em on a site. Have a couple hundred people do this. Generate it on specific types of images so it completely overwhelms the real training data there. Tell them not to crawl it. Internet archive will respect this, so they dont have to store all of that. Big evil AI companies however just gladly ignore that and now whenever images with those specific 100 or so topics are generated with that dataset, something horrible is created. Thats also how you would protect your own art.

2: Make a site for other artists that requires a captcha to get in Scrapers dont go through even the most basic ones. NOT recaptcha, that not only isnt an actual captcha, its proven spyware. If thats not enough, google will probably just ignore their own captcha anyways. This isnt foolproof, but its good for the world in many ways.

Why am i telling this to you? Because they pretty much DDOS open source software instead of respecting basic internet etiquette so im just leaving this here.
Source: youtube · Viral AI Reaction · 2025-04-01T17:0… · ♥ 2
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx0BR9jCqWFT_dU9UF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw8w_22VBvfUITkpmh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzWj25eEupIztInYPF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyx7AEX3hS00c5UFb94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyRCqJ4jREPbILfzsJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwN_2fpgkBSmInrfzx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyCK2IZr2XlRq5cIZV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwOqPRJ8De43P7ZcgJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxyIZvwgnr8_Nt0vX94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyI3s6p6y4Nj5DwRCZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
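A raw response like the one above can be parsed and sanity-checked before the codes are stored. The following is a minimal sketch; the allowed label sets are inferred only from the values visible in this dump, and the real codebook may define additional categories:

```python
import json

# Label sets inferred from the values seen in this dump (assumption:
# the full codebook may allow more categories per dimension).
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"mixed", "unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    fall inside the allowed sets for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Hypothetical record for illustration; real ids look like "ytc_...".
raw = '[{"id": "ytc_example", "responsibility": "user", ' \
      '"reasoning": "consequentialist", "policy": "liability", ' \
      '"emotion": "outrage"}]'
print(len(validate_codes(raw)))  # 1
```

Records that fail validation can then be flagged for re-coding rather than silently written to the results table.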