Nearly Zero-Cost Protection Against Mimicry by Personalized Diffusion Models
Saved in:
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Recent advancements in diffusion models revolutionize image generation but
pose risks of misuse, such as replicating artworks or generating deepfakes.
Existing image protection methods, though effective, struggle to balance
protection efficacy, invisibility, and latency, thus limiting practical use. We
introduce perturbation pre-training to reduce latency and propose a
mixture-of-perturbations approach that dynamically adapts to input images to
minimize performance degradation. Our novel training strategy computes
protection loss across multiple VAE feature spaces, while adaptive targeted
protection at inference enhances robustness and invisibility. Experiments show
comparable protection performance with improved invisibility and drastically
reduced inference time. The code and demo are available at
https://webtoon.github.io/impasto |
---|---|
DOI: | 10.48550/arxiv.2412.11423 |
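The abstract's mixture-of-perturbations idea can be illustrated with a minimal NumPy sketch: a small bank of pre-trained perturbations is mixed per input by a softmax router, and the result is clipped to an L-infinity budget for invisibility. The bank, the correlation-based router, and the budget value below are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained perturbation bank (stand-in for the paper's
# "perturbation pre-training"); 4 experts over toy 8x8 single-channel images.
BANK = rng.normal(0.0, 1.0, size=(4, 8, 8))
EPS = 8 / 255  # assumed L-infinity budget keeping the perturbation invisible

def route(image):
    """Toy router: score each expert by correlation with the input image,
    then mix experts with softmax weights (mixture-of-perturbations)."""
    scores = np.array([np.vdot(p, image) for p in BANK])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # Weighted sum over the expert axis -> one per-image perturbation.
    return np.tensordot(w, BANK, axes=1)

def protect(image):
    """Add the routed perturbation, clipped to the invisibility budget
    and back into the valid pixel range."""
    delta = np.clip(route(image), -EPS, EPS)
    return np.clip(image + delta, 0.0, 1.0)

img = rng.uniform(0.0, 1.0, size=(8, 8))
out = protect(img)
```

Because the perturbation is clipped to ±EPS before being added and the final clip to [0, 1] can only shrink the change, the protected image never differs from the input by more than EPS per pixel.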