Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Whatever that AI bro did is beside the point. NightShade has already been defeated; as of Wednesday it is dead and useless. A group of researchers just put out LightShed, which has a near-100% success rate at detecting Nightshade and is almost as good at undoing it. I would strongly recommend any artist trying to protect their work use Glaze instead of whatever ends up replacing Nightshade. Glaze can definitely be defeated too, but it is a much safer option for legal reasons. What makes Glaze a fundamentally better idea is that developing a tool to break Glaze means you are aiming to pirate content. Institutions are a lot less likely to green-light research into cracking DRM, and companies will think twice about internally developing a tool whose very existence can lose them a lawsuit. This is somewhat beside the point, but at 2:50 you seem to have a misunderstanding of what a LoRA is. In simple terms, LoRA is a way to fine-tune a model on a machine that normally wouldn't be capable of training a model that size. If a LoRA works, then a full fine-tuning or training would definitely work. In more technical terms, instead of retraining the full weight matrix W, LoRA uses W' = W + AB, where A and B are small learned low-rank matrices rather than a full-size weight matrix. The end result is a retrained model produced by a less capable but much cheaper training method. Nightshade should protect against a LoRA (or at least one with no countermeasures), because fine-tuning is retraining with a high bias toward the samples you are tuning for, so Nightshade is extra effective against LoRAs: the poisoned samples are weighted extremely strongly compared to the rest of the dataset.
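The LoRA update the commenter describes can be sketched in a few lines of NumPy. This is a minimal illustration of W' = W + AB, not any particular library's implementation; the dimensions d, k and rank r are arbitrary example values.

```python
import numpy as np

# LoRA sketch: instead of updating the full weight matrix W (d x k),
# train two small low-rank factors A (d x r) and B (r x k), with r << min(d, k).
d, k, r = 1024, 1024, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen pretrained weights
A = rng.standard_normal((d, r)) * 0.01  # learned low-rank factor
B = np.zeros((r, k))                    # zero-initialized, so W' == W before training

W_prime = W + A @ B                     # effective weights W' = W + AB

full_params = W.size             # 1,048,576 values to train in a full fine-tune
lora_params = A.size + B.size    # only 16,384 trainable values with LoRA
print(full_params, lora_params)  # prints: 1048576 16384
```

The point of the comparison is the parameter count: training A and B touches roughly d*r + r*k values instead of d*k, which is why a LoRA fits on hardware that could never fine-tune the full matrix.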
youtube Viral AI Reaction 2025-08-17T01:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwqnLyjqf-Yu60Rd194AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw4dHmA_xkv9nXDNm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyQup-tfp7fhUn1XPl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_UgwYyb19aLgpDLH_o014AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzuddFUjKAoip55HR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwxM1om6fj05ToubGJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxlsDJRgbXIuDYa4NR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgyDmx4rU_792W8ji2J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugz2Nx9C-eIbgoqEuep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxtP4mSgU8zkYreWk94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}]