Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Soooo... Let's say one would write a bot that prompts an AI to do some specific art, like, just as an example, from some guy named Greg. Then the bot runs said art through Nightshade and posts the result on every popular image board and social media there is. Rinse and repeat. Wouldn't that be a nice big f u for the AI bros? When they start to build up defenses against that, the bot could also resize or crop the image randomly, to make sure they don't develop a sort of antidote (code that recognises the Nightshade artifacts and reverts them). I'm no AI expert, but from what I gathered this will NOT kill art AIs, unless the Nightshade (and preferably other software too) attacks are VERY widespread and evolve constantly. At some point they will design filters to get rid of the poisoned pieces, maybe even clean them up and use them anyway, unless the poison constantly changes. When their AI devolves, they "just" check the data to figure out when it went downhill and exclude it from the generation material, then analyse it to find filters to automatically ignore it. But in general, I'm completely with you here: AI is one of the most powerful tools we developed in the past decades, and by far the most misused, possibly ever. Maybe excluding dynamite and TNT, which were strictly developed with quarries in mind but ended up killing millions in war.
YouTube · Viral AI Reaction · 2024-11-02T12:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgydTwgXsVbOotrA4lZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzG0ezwKW_Ji-364Ul4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzd1Pc_MElvw0HQC9B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5ej1cWB5wcDk76ZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwvyAubEUz7RWDW7Qt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwTBCZCSjP_39Cef2F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx854c_4IkQ0iezkBt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzUT3fFIZi1j7xvcXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyKSM3w1m2gJB3axWh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzwwfeaqNcOXz3VwQF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
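A sketch of how the per-comment coding table could be recovered from a raw response like the one above: parse the JSON array, index records by comment id, and read off the four coding dimensions. This is illustrative only; the field names come from the JSON itself, but the parsing code and the abridged two-record sample are assumptions, not part of the original pipeline.

```python
import json

# Abridged raw model output: a JSON array of per-comment codes
# (two records copied from the full response above).
raw = '''[
  {"id":"ytc_UgwvyAubEUz7RWDW7Qt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgydTwgXsVbOotrA4lZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]'''

# Index the records by YouTube comment id for quick lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for one comment and print it as dimension/value pairs,
# mirroring the "Coding Result" table shown above.
rec = codes["ytc_UgwvyAubEUz7RWDW7Qt4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```

A real pipeline would also want to handle malformed model output (truncated JSON, missing keys), e.g. by wrapping `json.loads` in a try/except and validating each record against the expected dimension names before indexing.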