Is Diffusion Model Safe? Severe Data Leakage via Gradient-Guided Diffusion Model
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Gradient leakage has been identified as a potential source of privacy breaches in modern image processing systems, where an adversary can completely reconstruct the training images from leaked gradients. However, existing methods are restricted to reconstructing low-resolution images, so the data leakage risks of image processing systems have not been sufficiently explored. In this paper, by exploiting diffusion models, we propose an innovative gradient-guided fine-tuning method and introduce a new reconstruction attack that is capable of stealing private, high-resolution images from image processing systems through leaked gradients, where severe data leakage occurs. Our attack method is easy to implement and requires little prior knowledge. The experimental results indicate that current reconstruction attacks can steal images only up to a resolution of $128 \times 128$ pixels, while our attack method can successfully recover and steal images with resolutions up to $512 \times 512$ pixels. Our attack method significantly outperforms the SOTA attack baselines in terms of both pixel-wise accuracy and time efficiency of image reconstruction. Furthermore, our attack can render differential privacy ineffective to some extent.
DOI: 10.48550/arxiv.2406.09484
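
To make the gradient-leakage setting referenced in the abstract concrete, below is a minimal sketch of the generic gradient-matching reconstruction attack (DLG-style) that prior work relies on and that this paper improves upon with diffusion-model guidance. This is not the paper's method: the victim model (a ResNet-18 classifier), image size, learning rate, iteration count, and the assumption that the label is known are all illustrative assumptions for the sketch.

```python
# Sketch of gradient-matching reconstruction: the attacker optimizes a dummy
# image so that its gradients match the gradients leaked from the victim's
# training step. All models and hyperparameters here are illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10)          # hypothetical victim model
criterion = torch.nn.CrossEntropyLoss()

# Gradients "leaked" from the victim's training step on a private image.
x_private = torch.rand(1, 3, 224, 224)
y_private = torch.tensor([3])
leaked_grads = torch.autograd.grad(
    criterion(model(x_private), y_private), model.parameters()
)

# Attacker's dummy image, optimized to reproduce the leaked gradients.
x_dummy = torch.rand(1, 3, 224, 224, requires_grad=True)
y_dummy = y_private.clone()               # label assumed known for simplicity
opt = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(x_dummy), y_dummy), model.parameters(),
        create_graph=True,                # keep graph for second-order grads
    )
    # Gradient-matching loss: squared differences summed over all layers.
    loss = sum(F.mse_loss(dg, lg, reduction="sum")
               for dg, lg in zip(dummy_grads, leaked_grads))
    loss.backward()
    opt.step()
```

This per-pixel optimization is what limits existing attacks to low resolutions; the paper's contribution, per the abstract, is to replace it with gradient-guided fine-tuning of a diffusion model so that much higher-resolution images (up to $512 \times 512$) can be reconstructed.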