Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness
Format: Article
Language: English
Abstract: Adversarial examples, crafted by adding imperceptible perturbations to natural inputs, can easily fool deep neural networks (DNNs). One of the most successful methods for training adversarially robust DNNs is solving a nonconvex-nonconcave minimax problem with an adversarial training (AT) algorithm. However, among the many AT algorithms, only Dynamic AT (DAT) and You Only Propagate Once (YOPO) guarantee convergence to a stationary point. In this work, we generalize the stochastic primal-dual hybrid gradient algorithm to develop semi-implicit hybrid gradient methods (SI-HGs) for finding stationary points of nonconvex-nonconcave minimax problems. SI-HGs have the convergence rate $O(1/K)$, which improves upon the rate $O(1/K^{1/2})$ of DAT and YOPO. We devise a practical variant of SI-HGs, and show that it outperforms other AT algorithms in terms of convergence speed and robustness.
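For context, the nonconvex-nonconcave minimax problem mentioned in the abstract is, in the adversarial training setting, typically the standard robust optimization objective. The sketch below states that usual formulation; the symbols ($f_\theta$ for the network, $\ell$ for the loss, $\epsilon$ for the perturbation budget, $\mathcal{D}$ for the data distribution) are generic background notation and are not taken from this paper.

$$
\min_{\theta} \; \mathbb{E}_{(x,y)\sim \mathcal{D}} \left[ \max_{\|\delta\|_p \le \epsilon} \ell\big(f_\theta(x+\delta),\, y\big) \right]
$$

The outer minimization over the network parameters $\theta$ is nonconvex, and the inner maximization over the perturbation $\delta$ is nonconcave, which is why convergence guarantees (such as the $O(1/K)$ rate claimed for SI-HGs versus $O(1/K^{1/2})$ for DAT and YOPO) are nontrivial for AT algorithms.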
DOI: 10.48550/arxiv.2202.10523