Smoothed Inference for Adversarially-Trained Models
Format: Article
Language: English
Abstract: Deep neural networks are known to be vulnerable to adversarial attacks.
Current methods of defense against such attacks are based on either implicit or
explicit regularization, e.g., adversarial training. Randomized smoothing, the
averaging of the classifier outputs over a random distribution centered at the
sample, has been shown to guarantee the performance of a classifier subject to
bounded perturbations of the input. In this work, we study the application of
randomized smoothing as a way to improve performance on unperturbed data as
well as to increase robustness to adversarial attacks. The proposed technique
can be applied on top of any existing adversarial defense, but works
particularly well with randomized approaches. We examine its performance on
common white-box (PGD) and black-box (transfer and NAttack) attacks on CIFAR-10
and CIFAR-100, substantially outperforming the previous state of the art in most
scenarios and performing comparably in the others. For example, we achieve 60.4%
accuracy under a PGD attack on CIFAR-10 using ResNet-20, outperforming the
previous state of the art by 11.7%. Since our method is based on sampling, it
lends itself well to trading off model inference complexity against performance.
A reference implementation of the proposed techniques is provided at
https://github.com/yanemcovsky/SIAM
DOI: 10.48550/arxiv.1911.07198
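
To make the smoothing procedure described in the abstract concrete, below is a minimal PyTorch sketch of smoothed inference: the classifier's softmax outputs are averaged over Gaussian perturbations centered at the input. This is only an illustration of the general idea, not the authors' reference implementation (see the linked repository for that); the function name smoothed_predict and the parameters sigma and n_samples are hypothetical choices.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=32):
    """Monte Carlo estimate of a smoothed classifier: average the
    model's softmax outputs over Gaussian noise centered at x.
    (Illustrative sketch; names and defaults are assumptions.)"""
    model.eval()
    with torch.no_grad():
        # Replicate the single input (C, H, W) into a batch and add
        # i.i.d. Gaussian noise with standard deviation sigma.
        batch = x.unsqueeze(0).repeat(n_samples, 1, 1, 1)
        batch = batch + sigma * torch.randn_like(batch)
        # Average the per-sample class probabilities.
        probs = torch.softmax(model(batch), dim=1)
    return probs.mean(dim=0)  # (num_classes,) smoothed probabilities
```

Increasing n_samples lowers the variance of the Monte Carlo estimate at a proportional cost in forward passes, which is the trade-off between inference complexity and performance that the abstract mentions.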