The Devil is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models
Format: Article
Language: English
Abstract: Deep Generative Models (DGMs) are a popular class of deep learning models that find widespread use because of their ability to synthesize data from complex, high-dimensional manifolds. However, even with their increasing industrial adoption, they have not been subjected to rigorous security and privacy analysis. In this work we examine one such aspect, namely backdoor attacks on DGMs, which can significantly limit the applicability of pre-trained models within a model supply chain and, at the very least, cause massive reputation damage for companies sourcing DGMs from third parties.

While similar attack scenarios have been studied in the context of classical prediction models, their manifestation in DGMs has not received the same attention. To this end we propose novel training-time attacks that result in corrupted DGMs which synthesize regular data under normal operation and designated target outputs for inputs sampled from a trigger distribution. These attacks are based on an adversarial loss function that combines the dual objectives of attack stealth and fidelity (a schematic form is sketched below). We systematically analyze these attacks and show their effectiveness for a variety of approaches, including Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as well as for different data domains, including images and audio. Our experiments show that, even for large-scale industry-grade DGMs such as StyleGAN, our attacks can be mounted with only modest computational effort. We also motivate suitable defenses based on static/dynamic model and output inspections, demonstrate their usefulness, and prescribe a practical and comprehensive defense strategy that paves the way for the safe usage of DGMs.
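The dual-objective loss mentioned in the abstract can be written schematically as follows. This is a minimal sketch based only on the abstract's description, not the paper's exact formulation: the symbols $G_{\theta_0}$ (the clean pre-trained generator), $p_z$ (the regular latent distribution), $p_{\mathrm{trig}}$ (the attacker's trigger distribution), $x^{*}$ (the designated target output), $d$ (a distance measure, e.g. $\ell_2$), and $\lambda$ (a trade-off weight) are illustrative assumptions.

```latex
% Schematic backdoor-training objective (illustrative notation):
% the first term enforces stealth (off-trigger outputs match the clean generator),
% the second term enforces fidelity (trigger inputs map to the attacker's target).
\min_{\theta}\;
  \underbrace{\mathbb{E}_{z \sim p_z}\!\left[ d\!\left( G_{\theta}(z),\, G_{\theta_0}(z) \right) \right]}_{\text{stealth}}
  \;+\;
  \lambda\,
  \underbrace{\mathbb{E}_{z \sim p_{\mathrm{trig}}}\!\left[ d\!\left( G_{\theta}(z),\, x^{*} \right) \right]}_{\text{fidelity}}
```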
DOI: 10.48550/arxiv.2108.01644