Targeted Attack Improves Protection against Unauthorized Diffusion Customization
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Diffusion models set a new milestone for image generation, yet they raise public concerns because they can be fine-tuned on unauthorized images for customization. Protection based on adversarial attacks has emerged to counter such unauthorized diffusion customization: protective watermarks are added to images, which poison the diffusion models fine-tuned on them. However, current protections, which rely on untargeted attacks, do not appear to be effective enough. In this paper, we propose a simple yet effective improvement to protection against unauthorized diffusion customization by introducing targeted attacks. We show that with a carefully selected target, targeted attacks significantly outperform untargeted attacks in poisoning diffusion models and degrading the quality of customized images. Extensive experiments validate the superiority of our method over existing protections on two mainstream customization methods for diffusion models. To explain the surprising success of targeted attacks, we examine the mechanism of attack-based protections and propose a hypothesis, grounded in our observations, that deepens the understanding of these protections. To the best of our knowledge, we are the first to both reveal the vulnerability of diffusion models to targeted attacks and leverage targeted attacks to enhance protection against unauthorized diffusion customization. Our code is available on GitHub: https://github.com/psyker-team/mist-v2.
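To make the targeted-vs-untargeted distinction concrete, below is a minimal PyTorch sketch of an L_inf PGD perturbation in both modes. This is not the authors' Mist v2 implementation: the toy encoder, the target image `x_tgt`, and the loss choices are illustrative assumptions; the actual objective is in the repository linked above.

```python
# Minimal sketch contrasting untargeted and targeted PGD perturbations.
# Assumptions (not from the paper): the toy conv encoder, x_tgt, and the
# latent-distance loss used in the targeted example below.
import torch
import torch.nn as nn

def pgd_protect(x, loss_fn, targeted, eps=8/255, alpha=1/255, steps=40):
    """Craft a protective perturbation of `x` inside an L_inf ball of radius eps.

    Untargeted mode ascends loss_fn (maximize the model's loss on x);
    targeted mode descends it (pull x toward a chosen target), which the
    paper reports poisons diffusion customization far more effectively.
    """
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(x + delta)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            step = -alpha if targeted else alpha    # descend vs. ascend
            delta = (delta + step * grad.sign()).clamp(-eps, eps)
            delta = (x + delta).clamp(0, 1) - x     # keep the image valid
    return (x + delta).detach()

# Toy stand-in for a latent encoder; a real protection would use the LDM's
# VAE encoder and/or its denoising loss instead.
encoder = nn.Conv2d(3, 4, kernel_size=3, padding=1)
x = torch.rand(1, 3, 64, 64)       # image to protect
x_tgt = torch.rand(1, 3, 64, 64)   # hypothetical target image

# Targeted variant: minimize the latent distance to the target image.
protected = pgd_protect(
    x, lambda xa: ((encoder(xa) - encoder(x_tgt)) ** 2).mean(), targeted=True,
)
```

In this sketch, the untargeted baseline would maximize a model loss on the protected image, while the targeted variant steers the image toward a chosen target; per the abstract, the choice of target is what drives the improved poisoning.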
DOI: 10.48550/arXiv.2310.04687