Dynamic Efficient Adversarial Training Guided by Gradient Magnitude
Saved in:

| Main authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract:

Adversarial training is an effective but time-consuming way to train robust deep neural networks that can withstand strong adversarial attacks. In response to its inefficiency, we propose Dynamic Efficient Adversarial Training (DEAT), which gradually increases the number of adversarial iterations during training. We demonstrate that the gradient's magnitude correlates with the curvature of the trained model's loss landscape, allowing it to reflect the effect of adversarial training. Based on the magnitude of the gradient, we therefore propose a general acceleration strategy, M+ acceleration, which adjusts the training procedure automatically and highly effectively. M+ acceleration is computationally efficient and easy to implement. It is suited to DEAT and compatible with the majority of existing adversarial training techniques. Extensive experiments were conducted on the CIFAR-10 and ImageNet datasets under various training settings. The results show that the proposed M+ acceleration significantly improves the training efficiency of existing adversarial training methods while achieving comparable robustness. This demonstrates that the strategy is highly adaptive and offers a valuable solution for automatic adversarial training.
DOI: 10.48550/arxiv.2103.03076
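The abstract does not give the exact update rule, but its core idea, starting with few adversarial iterations and adding more as gradient-magnitude measurements indicate the loss landscape is flattening, can be sketched roughly as below. The function name, the "magnitude stopped growing" trigger, and the per-epoch granularity are illustrative assumptions for this sketch, not the paper's actual M+ algorithm.

```python
def m_plus_schedule(grad_magnitudes, start_iters=1):
    """Sketch of a dynamic adversarial-iteration schedule.

    grad_magnitudes: per-epoch mean gradient-norm measurements
    (e.g. averaged L2 norms of input gradients over a training epoch).
    Returns the number of inner adversarial (PGD-style) iterations to
    use in each epoch: the count starts at `start_iters` and is bumped
    by one whenever the measured magnitude fails to grow, a stand-in
    trigger for "adversarial training has stopped making progress at
    the current attack strength".
    """
    iters = start_iters
    prev = None
    schedule = []
    for m in grad_magnitudes:
        if prev is not None and m <= prev:
            iters += 1  # strengthen the inner attack for this epoch
        schedule.append(iters)
        prev = m
    return schedule


# Example: magnitude grows, stalls, grows, stalls -> iterations ramp up.
print(m_plus_schedule([1.0, 1.5, 1.4, 1.6, 1.2]))  # [1, 1, 2, 2, 3]
```

In a real training loop the returned count would set the number of PGD steps used to craft adversarial examples in the next epoch; the monotone, one-at-a-time increase mirrors the "gradually increases the adversarial iteration" behavior the abstract describes.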