Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
Saved in:
Main Authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | In recent years, deep neural network approaches have been widely adopted for
machine learning tasks, including classification. However, they were shown to
be vulnerable to adversarial perturbations: carefully crafted small
perturbations can cause misclassification of legitimate images. We propose
Defense-GAN, a new framework leveraging the expressive capability of generative
models to defend deep neural networks against such attacks. Defense-GAN is
trained to model the distribution of unperturbed images. At inference time, it
finds an output close to a given image that does not contain the adversarial
changes. This output is then fed to the classifier. Our proposed method can be
used with any classification model and does not modify the classifier structure
or training procedure. It can also be used as a defense against any attack as
it does not assume knowledge of the process for generating the adversarial
examples. We empirically show that Defense-GAN is consistently effective
against different attack methods and improves on existing defense strategies.
Our code has been made publicly available at
https://github.com/kabkabm/defensegan |
DOI: | 10.48550/arxiv.1805.06605 |
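The abstract describes Defense-GAN's inference step: before classification, the (possibly perturbed) input is projected onto the range of a generator trained on clean images by searching for a latent vector whose generated image is closest to the input, and the reconstruction is then classified. Below is a minimal sketch of that projection step, assuming a pre-trained PyTorch generator `G`; the function and parameter names are hypothetical, and the authors' actual implementation is available at the linked repository.

```python
import torch


def defense_gan_project(x, G, latent_dim=128, n_restarts=10, n_steps=200, lr=0.05):
    """Project x onto the range of generator G by minimizing ||G(z) - x||^2 over z.

    x          : (C, H, W) input image tensor, possibly adversarially perturbed
    G          : pre-trained generator mapping latent vectors to images
    n_restarts : number of random restarts (the reconstruction objective is non-convex)
    n_steps    : gradient-descent steps per restart
    Returns the reconstruction G(z*), which is then fed to an unmodified classifier.
    """
    x = x.unsqueeze(0)                            # add batch dimension -> (1, C, H, W)
    best_loss, best_rec = float("inf"), None
    for _ in range(n_restarts):
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = ((G(z) - x) ** 2).mean()       # reconstruction error
            loss.backward()
            opt.step()
        if loss.item() < best_loss:               # keep the best restart
            best_loss, best_rec = loss.item(), G(z).detach()
    return best_rec.squeeze(0)


# Usage (names hypothetical): logits = classifier(defense_gan_project(x_adv, generator))
```

Because the reconstruction never leaves the generator's learned manifold of clean images, the classifier sees an approximation of the input with the adversarial changes largely removed, and neither the classifier's architecture nor its training procedure needs to change.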