Boundary augment: A data augment method to defend poison attack

Bibliographic details
Published in: IET Image Processing, 2021-11, Vol. 15 (13), p. 3292-3303
Authors: Chen, Xuan; Ma, YueNa; Lu, ShiWei; Yao, Yu
Format: Article
Language: English
Description
Abstract: In recent years, Deep Neural Networks (DNNs) have been applied in many fields such as computer vision and natural language processing. Many third-party cloud training platforms, for example Colab (Google) or the AWS cloud platform, have been built so that individual users and small enterprises can train their models. These cloud platforms carry potentially severe risks, including poison attacks, and poison attacks are likewise a serious threat to federated learning. In this paper, a novel method is introduced that defends against poison attacks by estimating the distribution of the poisoned data and retraining the backdoored model with a small amount of training data. The distribution estimated with the manifold DeepFool algorithm fits the poisoned data well and can be used to search for the manifold boundary between the poisoned and the clean data. Unlike empirical defense methods, the authors' approach is attack-agnostic, meaning it remains robust across different attack methods; it is also shown that adversarial training is a practical way to defend against poison attacks. The approach is evaluated on the MNIST, CIFAR-10, GTSRB and ImageNet datasets. The accuracy of the retrained model decreases only slightly while the attack success rate (ASR) drops drastically, which indicates that the approach generalizes well enough to defend against most poison attacks.
ISSN: 1751-9659, 1751-9667
DOI: 10.1049/ipr2.12325
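
The record reproduces only the abstract, not the algorithm itself. As a rough illustration of the general idea the abstract describes (DeepFool-style perturbations used to locate samples near the decision boundary, followed by fine-tuning the suspect model on a small clean set), a minimal PyTorch sketch is given below. The function names, hyper-parameters and training loop are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a DeepFool-style boundary search followed by
# fine-tuning on a small clean set. Names and hyper-parameters are assumed.
import torch
import torch.nn.functional as F


def deepfool_boundary(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    """Push a single sample x (shape [1, C, H, W]) just across the nearest
    decision boundary of `model`, using the linearized DeepFool update."""
    model.eval()
    orig_label = model(x).argmax(dim=1).item()
    x_adv = x.clone().detach().requires_grad_(True)

    for _ in range(max_iter):
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != orig_label:
            break  # already on the other side of the boundary
        grad_orig = torch.autograd.grad(logits[0, orig_label], x_adv,
                                        retain_graph=True)[0]
        best_dist, best_w = float("inf"), None
        for k in range(num_classes):
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[0, k], x_adv,
                                         retain_graph=True)[0]
            w_k = grad_k - grad_orig                  # linearized boundary normal
            f_k = (logits[0, k] - logits[0, orig_label]).item()
            dist = abs(f_k) / (w_k.norm() + 1e-8)     # distance to boundary k
            if dist < best_dist:
                best_dist, best_w = dist, w_k
        # Step just past the closest boundary.
        r = (1 + overshoot) * best_dist * best_w / (best_w.norm() + 1e-8)
        x_adv = (x_adv + r).clone().detach().requires_grad_(True)

    return x_adv.detach()


def boundary_retrain(model, clean_loader, epochs=5, lr=1e-4):
    """Fine-tune a suspected backdoored model on boundary samples that keep
    their original clean labels (an adversarial-training style defense)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in clean_loader:
            # Generate boundary samples from the small clean set.
            x_b = torch.stack([deepfool_boundary(model, xi.unsqueeze(0)).squeeze(0)
                               for xi in x])
            model.train()
            loss = (F.cross_entropy(model(x_b), y) +
                    F.cross_entropy(model(x), y))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

In this sketch the boundary samples keep their original clean labels, so the fine-tuning step pulls the decision boundary away from the suspect regions while training on the clean samples preserves accuracy, which is consistent with the trade-off reported in the abstract (slightly lower accuracy, drastically lower ASR).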