Art cannot be technologically protected from AI

While a lot of effort has gone into protecting art from use as training data (Mist, Glaze [1], Nightshade [2]), that effort seems mostly wasted, because comparable or less effort suffices to remove the protection. Thus, the only protection possible must be social.

My primary takeaway is that protecting art from image generators fundamentally amounts to adding noise to images, and that by adding more noise to those images, the protection can be stripped away again. Visual style does not live in small pixel-level details, so modifying small details is more of an annoyance, with compute wasted on both sides, than a useful defense.
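To make that concrete, here is a toy sketch of the dynamic in Python. This is my illustration, not any tool's actual code (Glaze is closed source): a random bounded perturbation stands in for the optimized "cloak," a Gaussian blur stands in for the diffusion-model denoiser that real attacks use, and artwork.png is a placeholder filename.

import numpy as np
from PIL import Image, ImageFilter

def protect(img: np.ndarray, eps: float = 0.03) -> np.ndarray:
    """'Cloak' an image: add a small, bounded perturbation.
    (Stand-in: real tools optimize this perturbation adversarially.)"""
    cloak = np.random.uniform(-eps, eps, img.shape)
    return np.clip(img + cloak, 0.0, 1.0)

def strip(img: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Purify: bury the cloak under fresh noise, then denoise.
    (Stand-in: real attacks denoise with a diffusion model.)"""
    noised = np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)
    pil = Image.fromarray((noised * 255).astype(np.uint8))
    denoised = pil.filter(ImageFilter.GaussianBlur(radius=1.5))
    return np.asarray(denoised, dtype=np.float32) / 255.0

art = np.asarray(Image.open("artwork.png").convert("RGB"), np.float32) / 255.0
protected = protect(art)     # what the artist publishes
purified = strip(protected)  # what a mimic finetunes on instead
# The style (low-frequency structure) survives; the cloak (high-frequency
# detail) does not, since the fresh noise is larger than the cloak budget.

The asymmetry is baked in: the defender's perturbation must stay invisible (small eps), while the attacker's noise only has to leave the style recognizable (larger sigma), so the attacker always has room to drown the cloak.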

Artists are necessarily at a disadvantage since they have to act first (i.e., once someone downloads protected art, the protection can no longer be changed). To be effective, protective tools face the challenging task of creating perturbations that transfer to any finetuning technique, even ones chosen adaptively in the future. To illustrate this point, updated versions of Mist (Liang et al., 2023) and Glaze (Shan et al., 2023a) were released after the conclusion of our study, and yet we found these updated versions to be similarly ineffective against our methods. We thus caution that adversarial machine learning techniques will not be able to reliably protect artists from generative style mimicry, and urge the development of alternative measures to protect artists. [3]

This statement is slightly undermined by the next line:

We disclosed our results to the affected protection tools prior to publication. In response, Glaze released a new version 2.1 that protects against the specific attacks we describe here. [3]

I was also a little concerned by "We make the conservative assumption that all the artist's images available online are protected," because I think that assumption is untrue in practice. However, it's probably irrelevant. I initially assumed the study's choice to use both historical and contemporary artists would bias its results towards finding the protections ineffective, since historical artists are more heavily represented in training datasets, but the authors report little difference between the two groups. The bias introduced by preexisting art in training data thus seems to have little effect.

Adding and removing noise didn't noticeably impact perceived quality. [3]

The protections offered by Glaze, Mist, and Anti-DreamBooth [4] are more effective for artists who are not easily mimicked by generative models in the first place. While this is obvious, it allows for easy cherry-picking to claim the protections work, or cherry-picking to argue that they don't (which is why I point it out).

GLEAN [5] is a model designed to bypass Glaze, and it is very successful, at least in its original testing.

As Glaze is a security measure to assist artists, a tool built to “break” Glaze raises serious ethical questions. We have reached out to the Glaze team regarding GLEAN on multiple occasions, but have received no response. As such, the codebase of GLEAN will not be published until responses from the Glaze team are received.

Glaze is closed source because its authors presume that secrecy makes it more secure. [6] It was also developed through copyright infringement, in violation of the GPL license of DiffusionBee. [7]


I started this note after seeing a claim on Bluesky that Glaze is not broken because no one has provided evidence, and then reading the first study I found. I searched in vain for a Conclusions section, because the authors used "Main Findings" as the heading instead. Looking at some of the details of how the study was performed didn't give me a good sense of what was being claimed, so I later read the whole thing and posted my thoughts as I read, which inspired me to copy them here.

Ironically, while writing this and researching further, I was blocked by that person, so my post has been stripped of its context and hidden from anyone who might meaningfully need to be informed about these issues. 👍


  1. Glaze is meant to prevent text-to-image copying of a style.
  2. Nightshade is designed to poison diffusion models.
  3. Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI (arXiv) by Robert Hönig, Javier Rando, Nicholas Carlini, and Florian Tramèr.
  4. Anti-DreamBooth was designed to prevent image generations of a subject (rather than to protect an art style).
  5. GLEAN: Generative Learning for Eliminating Adversarial Noise (arXiv) by Justin Lyu Kim and Kyoungwan Woo.
  6. Glazing over security (article) by Nicholas Carlini and Florian Tramèr (two authors of Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI).
  7. The Problems With UChicago's Glaze (article) by Jackson Roberts. (This article claims to be published a year after it was updated, which is a yellow flag for reliability in my experience. I believe the 2022 dates are typos and that all of this occurred in 2023, since other sources date it only to 2023.)
