Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks
Main Authors:
Format: Article
Language: English
Online Access: Order full text
Abstract: Deep learning has become the state-of-the-art approach to many machine learning problems such as classification. It has recently been shown that deep learning is highly vulnerable to adversarial perturbations. Taking the camera systems of self-driving cars as an example, small adversarial perturbations can cause the system to make errors in important tasks, such as classifying traffic signs or detecting pedestrians. Hence, in order to use deep learning without safety concerns, a proper defense strategy is required. We propose to use ensemble methods as a defense strategy against adversarial perturbations. We find that an attack that leads one model to misclassify does not necessarily fool other networks performing the same task. This makes ensemble methods an attractive defense strategy against adversarial attacks. We empirically show for the MNIST and CIFAR-10 data sets that ensemble methods not only improve the accuracy of neural networks on test data but also increase their robustness against adversarial perturbations.
DOI: 10.48550/arxiv.1709.03423
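
The abstract describes combining several independently trained networks so that a perturbation which fools one model need not fool the others. The specific combination rules used in the paper are not given in this record, so the following is a minimal sketch assuming one common scheme: averaging per-model class probabilities before taking the argmax. The `ensemble_predict` helper and the toy probability values are illustrative, not the paper's MNIST/CIFAR-10 setup.

```python
# Minimal sketch of an ensemble defense: average class probabilities over
# several models, then predict the class with the highest averaged score.
# All numbers below are toy values for illustration only.
import numpy as np

def ensemble_predict(prob_per_model: np.ndarray) -> np.ndarray:
    """prob_per_model: shape (n_models, n_samples, n_classes)."""
    avg_probs = prob_per_model.mean(axis=0)   # (n_samples, n_classes)
    return avg_probs.argmax(axis=1)           # predicted class per sample

# Three models classify one (possibly adversarial) input into 10 classes.
# Model 0 is fooled into favouring class 7; models 1 and 2 still favour
# the true class 3, so the averaged prediction recovers class 3.
probs = np.full((3, 1, 10), 0.02)
probs[0, 0, 7] = 0.60   # fooled model
probs[1, 0, 3] = 0.55   # unfooled model
probs[2, 0, 3] = 0.50   # unfooled model
probs /= probs.sum(axis=2, keepdims=True)     # renormalise each row

print(ensemble_predict(probs))                # -> [3]
```

Averaging probabilities is only one way to combine an ensemble; majority voting over the individual argmax predictions would behave the same way in this example, since two of the three models vote for class 3.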