Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is absolutely no way we can safely expand this technology without solving the very human problems it exposes. One of those problems is that there is no way to stop this technology from expanding. Human nature will always push the envelope.

0:05 AKA how to generate the most painfully generic slop that has ever been prompted for. Asking a GPT model to write T2I prompts for you is the equivalent of a child coloring inside the lines of a coloring book and deciding they are an artist. Unless you are literally too young to spell your name, nobody is impressed.

2:20 This is dangerously shortsighted. The important thing is to teach people to stop believing everything they see in pictures, and to hold people accountable when they are caught in a lie. Teaching people a suite of skills for identifying fake images, skills which will be completely obsolete inside of six months, only gives them false confidence and makes them easier to deceive. Dunning-Kruger, etc. Case in point: attacking general models' ability to draw hands is largely a waste of energy now that most people engaged in serious propaganda have dedicated hand-inpainting models of some description. You might as well try to cripple a modern military by cutting off their access to chariot wheels.

2:50 This line of thinking is just another way that anti-AI activism turns people into the unwitting pawns of big AI companies as they try to drive smaller or more honest competitors out of the market. It actively makes it harder for organizations that try to lawfully and ethically license artwork for use as data. Unless you are a major corporation with a massive army of web spiders and a very loose interpretation of the no-index rules, you are basically forced to draw from the same ever-shrinking pool of images that can be scraped cheaply. This does little to stop bad actors, but it does unfairly penalize anyone trying to compensate artists fairly for their work.
Instead of teaming up to challenge the real enemy, the poor and the frightened waste their energy fighting each other. The last thing artists need in this pivotal time is a large class of bitter and unemployed computer scientists who see them as an enemy.

3:41 As a person who invests several hours a week moderating user-generated AI content for deepfakes and other unlawful material, I can attest that the psychological burden is real, and we use every tool available to lighten the load. HOWEVER, every report we take must still be reviewed by a real human being. Computer vision technology is advancing at a breakneck pace, but it simply is not ready for even that small level of responsibility. Just look at the way Google's ContentID platform constantly fails, and they are the best-funded AI lab on Earth. You know how we make those tools better? Possibly even good enough that I can trust them to delete images without personally seeing another **** ever again? We improve general computer vision technology in a safe environment that is easy to test in. That is, we generate images with AI to test its ability to correctly associate words with shapes and colors. This is the minimal-damage test case, and people are already finding ways to use it for evil. Imagine what would happen if we started releasing autonomous LLM agents able to affect the physical world directly. (Anyone letting ChatGPT write their business emails without proofreading them is an idiot.)

4:50 So you're so fed up with people using AI to modify drawings and deceive people that you set out to use an AI image model (Nightshade is just a variant of Stable Diffusion) to modify your drawings and deceive people. You have become the very thing you swore to destroy. If you actually don't want labs studying your work, you should clearly mark it as poisoned, preferably in your noindex file, instead of trying to trick people into collapsing their models.
The ultimate goal is to enforce the policy you give in that file, not just to prank scientists, right? You did make sure that you are only uploading your art to places where the TOS doesn't allow scraping and that use a proper noindex to deter spiders, before resorting to vandalizing your own work with an AI watercolor filter, right? You're not deliberately poisoning datasets that are legal to use, right? Right?

5:20 "I even disguised it as a meme, to hopefully maximize its reach." You disgust me.
YouTube · Viral AI Reaction · 2024-10-29T05:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx3houHG9lAlsOu1ml4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJxhfo7yevih0hbpZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgziW3r5ivhCZD9N2YF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyk3E4Pu_7OCeKe6S54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJyBSKAlRWUFB6XZh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyD1T_0FnCrmzI0B1p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzfzZB41SrXwFV3XRZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzGEs7fMUaPmMkAjtN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugz1jUYLzFUx0ReHtPV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzzjKtSQMOYrolZOhB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
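The raw response above is a JSON array of per-comment codes. A minimal sketch of how such output could be parsed and sanity-checked before use — the field names come from the response itself, but the allowed value sets below are assumptions inferred only from the values visible in this log, and `validate_codes` is a hypothetical helper, not part of any coding pipeline shown here:

```python
import json

# Allowed values per coding dimension (ASSUMED from the values observed in the
# raw response above; a real codebook may define more categories).
SCHEMA = {
    "responsibility": {"user", "developer", "company", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "mixed", "resignation", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values are in-schema."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example: one valid record, one with an out-of-schema responsibility value.
raw = (
    '[{"id":"ytc_example1","responsibility":"user","reasoning":"mixed",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"ytc_example2","responsibility":"martian","reasoning":"mixed",'
    '"policy":"none","emotion":"approval"}]'
)
print(len(validate_codes(raw)))  # → 1
```

Validating against a fixed schema like this catches the most common LLM-coding failure mode: a syntactically valid response that invents a category outside the codebook.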