Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing
Format: Article
Language: English
Online access: Order full text
Abstract: While prior research has proposed a plethora of methods that build neural classifiers robust against adversarial attacks, practitioners remain reluctant to adopt them due to their unacceptably severe clean accuracy penalties. This paper significantly alleviates this accuracy-robustness trade-off by mixing the output probabilities of a standard classifier and a robust classifier, where the standard network is optimized for clean accuracy and is not robust in general. We show that the robust base classifier's confidence difference between correct and incorrect examples is the key to this improvement. In addition to providing intuitions and empirical evidence, we theoretically certify the robustness of the mixed classifier under realistic assumptions. Furthermore, we adapt an adversarial input detector into a mixing network that adaptively adjusts the mixture of the two base models, further reducing the accuracy penalty of achieving robustness. The proposed flexible method, termed "adaptive smoothing", can work in conjunction with existing or even future methods that improve clean accuracy, robustness, or adversary detection. Our empirical evaluation considers strong attack methods, including AutoAttack and adaptive attacks. On the CIFAR-100 dataset, our method achieves 85.21% clean accuracy while maintaining 38.72% $\ell_\infty$-AutoAttacked ($\epsilon = 8/255$) accuracy, making it the second most robust method on the RobustBench CIFAR-100 benchmark as of submission, while improving the clean accuracy by ten percentage points compared with all listed models. The code that implements our method is available at https://github.com/Bai-YT/AdaptiveSmoothing.
DOI: 10.48550/arxiv.2301.12554