Enhance the Visual Representation via Discrete Adversarial Training
Format: Article
Language: English
Online access: Order full text
Abstract: Adversarial Training (AT), commonly accepted as one of the most
effective defenses against adversarial examples, can largely harm standard
performance and thus has limited usefulness in industrial-scale production
and applications. Surprisingly, this phenomenon is the exact opposite in
Natural Language Processing (NLP) tasks, where AT can even benefit
generalization. We observe that the merit of AT in NLP tasks may derive from
the discrete and symbolic input space. To borrow this advantage from
NLP-style AT, we propose Discrete Adversarial Training (DAT). DAT leverages
VQGAN to reform image data into discrete, text-like inputs, i.e., visual
words, and then minimizes the maximal risk on such discrete images under
symbolic adversarial perturbations. We further provide an explanation from a
distributional perspective to demonstrate the effectiveness of DAT. As a
plug-and-play technique for enhancing the visual representation, DAT achieves
significant improvements on multiple tasks, including image classification,
object detection, and self-supervised learning. Notably, a model pre-trained
with Masked Auto-Encoding (MAE) and fine-tuned with our DAT, without extra
data, achieves 31.40 mCE on ImageNet-C and 32.77% top-1 accuracy on
Stylized-ImageNet, establishing a new state of the art. The code will be
available at https://github.com/alibaba/easyrobust.
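The min-max formulation in the abstract can be made concrete with a short
sketch: discretize the image into visual words, find an adversarial
substitution of those words (the inner maximization), and update the model on
the decoded result (the outer minimization). The PyTorch code below is a
minimal, hypothetical illustration under assumed interfaces, not the authors'
released implementation (see the easyrobust repository for that): `vq_encode`,
`vq_decode`, the codebook layout, and the hyperparameters `CODEBOOK_SIZE`,
`CODE_DIM`, and `EPS` are all illustrative stand-ins.

```python
import torch
import torch.nn.functional as F

# Illustrative hyperparameters, not taken from the paper.
CODEBOOK_SIZE, CODE_DIM, EPS = 512, 16, 0.1


def quantize(z, codebook):
    """Snap continuous latents (N, CODE_DIM) to their nearest visual words."""
    idx = torch.cdist(z, codebook).argmin(dim=1)   # (N,) codebook indices
    return idx, codebook[idx]                      # indices and embeddings


def dat_step(model, vq_encode, vq_decode, codebook, images, labels, optimizer):
    """One hypothetical DAT-style min-max step on VQGAN-discretized images."""
    b = images.size(0)

    # 1. Discretize: encode the image and snap its latents to visual words.
    z = vq_encode(images).reshape(-1, CODE_DIM)
    _, e = quantize(z, codebook)

    # 2. Inner maximization: one gradient-ascent step on the code embeddings,
    #    then re-quantize so the perturbation stays symbolic (a substitution
    #    of visual words rather than a pixel-space perturbation).
    e = e.detach().requires_grad_(True)
    x_hat = vq_decode(e.reshape(b, -1, CODE_DIM))
    adv_loss = F.cross_entropy(model(x_hat), labels)
    g = torch.autograd.grad(adv_loss, e)[0]
    _, e_adv = quantize(e.detach() + EPS * g.sign(), codebook)

    # 3. Outer minimization: update the model on the decoded adversarial image.
    optimizer.zero_grad()
    x_adv = vq_decode(e_adv.reshape(b, -1, CODE_DIM)).detach()
    loss = F.cross_entropy(model(x_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The re-quantization after the gradient step is the key design point this
sketch tries to capture: it keeps the adversarial example inside the discrete
space of visual words, mirroring word-substitution attacks in NLP-style AT
rather than continuous pixel perturbations.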
DOI: 10.48550/arxiv.2209.07735