Art cannot be technologically protected from AI

While a lot of effort has gone into protecting art from use in training data (Mist, Glaze1, Nightshade2), this effort seems mostly wasted, because similar or less effort can be used to strip that protection away. Thus, the only protection possible must be social.

My primary takeaway is that these tools protect art from image generation by adding noise to images, and that by adding further noise (and then removing it), the protection can be stripped away. Visual style is not carried by small details, so modifying small details amounts to an annoyance and wasted compute on both sides rather than useful protection.
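To make that intuition concrete, here is a toy sketch of the general idea (this is my mental model, not the paper's actual attack; the image, perturbation, and "purification" step are all made up for illustration): a small high-frequency perturbation can be mostly washed out by adding fresh noise and then denoising, while the coarse structure of the image survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "artwork": a smooth gradient (the coarse structure stands in for style).
x = np.linspace(0, 1, 64)
clean = np.outer(x, x)

# "Protection": a small, high-frequency, sign-pattern perturbation.
perturbation = 0.05 * np.sign(rng.standard_normal(clean.shape))
protected = clean + perturbation

# Crude "purification": add fresh Gaussian noise, then denoise with a box blur.
noisy = protected + 0.05 * rng.standard_normal(clean.shape)

def box_blur(img, k=3):
    """Average each pixel with its k x k neighborhood (a very simple denoiser)."""
    padded = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

purified = box_blur(noisy, k=5)

# The purified image lands much closer to the clean original than the
# protected one did, i.e. the perturbation was mostly stripped away.
err_protected = np.abs(protected - clean).mean()
err_purified = np.abs(purified - clean).mean()
print(err_protected, err_purified)
```

Real attacks use far better denoisers (e.g. diffusion models) than a box blur, but the asymmetry is the same: the defender's perturbation must survive every possible cleanup step, while the attacker only needs one that works.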

Artists are necessarily at a disadvantage since they have to act first (i.e., once someone downloads protected art, the protection can no longer be changed). To be effective, protective tools face the challenging task of creating perturbations that transfer to any finetuning technique, even ones chosen adaptively in the future. To illustrate this point, updated versions of Mist (Liang et al., 2023) and Glaze (Shan et al., 2023a) were released after the conclusion of our study, and yet we found these updated versions to be similarly ineffective against our methods. We thus caution that adversarial machine learning techniques will not be able to reliably protect artists from generative style mimicry, and urge the development of alternative measures to protect artists. 3

This statement is slightly undermined by the next line:

We disclosed our results to the affected protection tools prior to publication. In response, Glaze released a new version 2.1 that protects against the specific attacks we describe here. 3

I was also a little concerned by “We make the conservative assumption that all the artist’s images available online are protected.” because I think this assumption is untrue. However, it’s probably irrelevant. I initially assumed the study’s choice of historical and contemporary artists would bias its results toward claiming protections are ineffective, since historical artists are more heavily represented in training datasets, but the authors report little difference between these groups. The bias introduced by preexisting art seems to have little effect.

Adding and removing noise didn’t noticeably impact perceived quality.3

The protections offered by Glaze, Mist, and Anti-DreamBooth4 are more effective for artists that are not easily mimicked by generative models. While this is obvious, it allows for easy cherry-picking either to claim the protections are effective or to argue they are not (which is why I point it out).

GLEAN5 is a model designed to bypass Glaze, and it is very successful, at least in its original testing.

As Glaze is a security measure to assist artists, a tool built to “break” Glaze raises serious ethical questions. We have reached out to the Glaze team regarding GLEAN on multiple occasions, but have received no response. As such, the codebase of GLEAN will not be published until responses from the Glaze team are received.

Glaze is closed source because its authors presume that makes it more secure. 6 It was also developed through copyright infringement, in violation of the GPL license of DiffusionBee. 7


I started this note after seeing a claim on bsky that Glaze is not broken because no one has provided evidence, and then reading the first study I found. I mistakenly searched for a Conclusions section, when the authors used Main Findings as their heading instead. I then looked at some of the details of how the study was performed without getting a good sense of what was being claimed, so I later read the whole thing, posting my thoughts as I went, which inspired me to copy them here.

Ironically, during the process of writing this and researching more info, I was blocked by that person, so the post has been censored of its context and hidden from anyone who might meaningfully need to be informed about these issues. 👍


  1. Glaze is meant to prevent text-to-image copying of a style.
  2. Nightshade is designed to poison diffusion models.
  3. Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI (arXiv) by Robert Hönig & Javier Rando & Nicholas Carlini & Florian Tramèr
  4. Anti-DreamBooth was designed to prevent image generations of a subject (rather than to protect an art style).
  5. GLEAN: Generative Learning for Eliminating Adversarial Noise (arXiv) by Justin Lyu Kim & Kyoungwan Woo
  6. Glazing over security (article) by Nicholas Carlini & Florian Tramèr (two authors of Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI)
  7. The Problems With UChicago’s Glaze (article) by Jackson Roberts. (This article claims to be published a year after it was updated, which is a yellow flag for reliability in my experience. I believe the 2022 dates are typos, and this all occurred in 2023, due to other sources only being in 2023.)

I was asked how I’m doing

I just learned the FBI has stopped just putting trans women on lists and threatening them (and companies associated with healthcare) and has started following them (and invading homes) and is about to open up terrorism cases against activists in order to fabricate the narrative that all trans people are terrorists to back up their threats against all gender affirming healthcare and disappear even more people than have already been disappeared. I knew it was coming, I was just hoping it was coming slow enough that it wouldn’t arrive. But now it seems clear things are going to get a lot worse in the next few years before we have any chance of it getting better.

It feels hopeless to try to do anything, because there’s nothing I can do. And while I’m not a public figure, and thus much lower on the lists, I’m still there. Just because I’m last in line to be targeted doesn’t make it any better.

And unfortunately I was doing some reading and stumbled across some other really blatantly evil stuff being done by the federal government so I was already feeling rather depressed and trying to move on and ignore it because I can’t do anything about it and it is only causing me more harm to stress about it.

It’s also deeply upsetting because it’s the kind of thing most people don’t believe is happening, even if you clearly demonstrate evidence. So there’s not much push back. I’m being demonized just for existing, just like so many other people, and it’s just a small part of a much bigger plan to continue regressing on all human rights issues.

And this is after similar efforts stripped me of access to healthcare and even before that, I was arbitrarily denied most of what I need. So it’s not like I was getting anything I deserve anyhow. But now I’ll also be killed for needing it. Someday.

A year ago, I was talking to my partner about how we probably should leave the country because it’s only a matter of time before we are targeted, and it’ll be too late to leave well before that happens. Shortly after, trans people were banned from leaving the country. It’s already been too late for most of a year.

I dunno if you heard about that. It’s old news at this point, but they just stopped issuing passports for trans people. And they target trans people who already have one to make it more difficult to leave for any reason.

I think the most upsetting part is how it’s cruelty for no benefit. These efforts only extend harm to a broad range of people by targeting a minority. There is not a single shred of benefit for anyone. Not even the rich and powerful, because the economic harm does more to them than any consolidation of power assists them. :/

Ostensibly, the point is that trans people are more likely to advocate for positive change, because just being trans makes someone much more likely to recognize how many systems in place don’t actually benefit the majority. And they want to stop progress. But it doesn’t even really achieve that goal because it just highlights how bad things are.

a nitpick of Acerola’s “Generative AI is not what you think it is”

I was asked why I didn’t think this video is very good, and a lot of my thoughts on it are nitpicks, but since I went ahead and made a list, I figured I might as well share it here. This video is better than the majority on this topic, but my standards are even higher.

  • Kind of a nitpick, kind of not? The distinction between theft and piracy is important, because companies historically try to conflate them in order to levy heavier penalties on pirates by making their losses appear greater than they are. I really don’t like that everyone is calling mass piracy mass theft, because piracy really shouldn’t be treated as theft.
  • Nitpick: I wish they’d given more attention to the modern slavery behind datasets because the emotional torture of people in third-world countries is the most obviously unethical and directly damaging thing these companies do. It’s a good emotional hook and can get people to pay more attention to the bad being done.
  • Nitpick: Models being confidently incorrect is a problem that is being improved, but it’s portrayed as a fundamental issue.
  • The whole bit about machine learning not being accessible is just wrong. People have been making models on laptops for at least a decade. These are usually more demonstrations of concepts and toys, but can be useful. Just because a person can’t scale to the internet as a dataset doesn’t mean they can’t effectively use machine learning for tasks.
  • They’re completely wrong about models being at a dead-end because all internet data has been scraped. Synthetic data generation doesn’t cause model collapse and does improve models. The example given – piss-tint on images – is also completely misattributed to data incest when it’s actually due to yellowing in old photographs.
  • Electricity and water use are consistently overstated. The projected water use in 2027 for global AI usage is 0.17% of 2014’s global water use. Agriculture’s water waste is a much bigger problem – and I do mean waste, not usage. Agriculture must use the majority of water, since growing anything takes a lot of water, but the amount it uses is far in excess of how much it has to. Electricity usage I’m far less knowledgeable about, but am looking into. I know a big problem with electricity is in how costs are distributed, and how computers use electricity causes a unique strain problem.
  • Nitpick: I don’t like that they consistently say models aren’t improving, because they are. Benchmarks are kind of bullshit in my opinion, but I’ve seen increases in quality despite a lot of mistakes and steps backwards.
  • Nitpick: I think they’re wrong about how long these companies can burn money. In order to remain competitive with how speculative all of this is, they are spending cash exponentially faster, so the insane cash reserves they’ve built won’t last that long. There’s also the fact that we’re in a recession, and only AI is making the economy appear healthy. I don’t think an economy can actually survive long-term when all of it is failing except one small sector that has massively inflated value.
  • Calling for anyone who worked on machine learning to be tried for crimes against humanity is just fucked up. There’s a massive difference between what the few big companies are doing and all of the research that’s been done. Blaming everyone for the actions of a greedy few is wrong.
  • They didn’t mention a single model trained only on data with explicit permission. Feels misleading to ignore that there are exceptions to the general rule of mass piracy.
  • They didn’t mention anything about how machine learning has been used to discover new drugs or better understand biology. Despite many failures and the extreme infection of greed in the medical industry, that sector is where there is the most benefit from machine learning.

I guess that last point is kind of a nitpick as well, because it’s so far outside of the point of the video being about image and text generation, but when they started by defining how “AI” just means machine learning right now, it feels wrong to exclude the positive aspects entirely. If we demonize ALL machine learning, then we lose the benefits where they do exist.

I should also say that I know far more about failures of AI in medicine than successes, and I want to have more solid examples to point to when saying that AI has been useful in medicine. I recently made a video about that, and posted about it here.

Updated 2026-04-02 to include Puggietaur’s video about nitpicking. Since I claim to be nitpicking several times here, as well as in the title, I figured it’s worth pointing out that nitpicks shouldn’t always be easily dismissed.