Adaptive Wiener Filter and Natural Noise to Eliminate Adversarial Perturbation
Published in: Electronics (Basel), 2020-10, Vol. 9 (10), p. 1634
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Deep neural networks have been widely used in pattern recognition and speech processing, but their vulnerability to adversarial attacks has also been widely demonstrated. These attacks apply unstructured pixel-wise perturbations that fool the classifier while remaining imperceptible to the human visual system. The role of adversarial examples in the field of information security has received increased attention across a number of disciplines in recent years. An alternative defense follows the principle "like cures like". In this paper, we propose to use natural noise and adaptive Wiener filtering to mitigate the perturbation. Our method comprises two operations: noise addition, which adds natural noise to the input adversarial examples, and adaptive Wiener filtering, which denoises the images produced in the previous step. Based on a study of the distribution of attacks, adding natural noise disrupts adversarial perturbations to a certain extent, and they can then be removed by an adaptive Wiener filter, which is an optimal estimator with respect to the local variance of the image. The proposed improved adaptive Wiener filter automatically selects the optimal window size from a set of candidate windows based on the features of each image. Extensive experiments demonstrate that the proposed method can defend against adversarial attacks such as FGSM (Fast Gradient Sign Method), C&W, DeepFool, and JSMA (Jacobian-based Saliency Map Attack). In comparative experiments, our method outperforms or is comparable to state-of-the-art methods.
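The two-step pipeline in the abstract (natural-noise addition followed by adaptive Wiener filtering over candidate window sizes) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the noise level `noise_std`, the candidate windows `(3, 5, 7)`, and the window-selection heuristic (match the removed residual's variance to the injected noise variance) are all assumptions standing in for the paper's feature-based selection.

```python
import numpy as np
from scipy.signal import wiener

def noise_then_wiener(image, noise_std=0.05, window_sizes=(3, 5, 7), seed=0):
    """Sketch of a noise-addition + adaptive-Wiener-filter defense.

    `noise_std`, `window_sizes`, and the selection heuristic are
    illustrative assumptions, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    # Step 1: superimpose natural (Gaussian) noise so the structured
    # adversarial perturbation is dominated by unstructured noise that
    # a Wiener filter can model.
    noisy = np.clip(image + rng.normal(0.0, noise_std, image.shape), 0.0, 1.0)

    # Step 2: Wiener-filter the result with each candidate window.
    # scipy's wiener() estimates the local mean and variance in a sliding
    # window and shrinks each pixel toward the local mean where the local
    # variance approaches the noise power -- the local-variance-optimal
    # linear estimator the abstract refers to. We pass the known injected
    # noise power so the filter need not estimate it.
    best_out, best_score = None, np.inf
    for w in window_sizes:
        filtered = wiener(noisy, mysize=w, noise=noise_std ** 2)
        # Stand-in adaptivity heuristic (assumption): keep the window whose
        # removed residual has variance closest to the injected noise power.
        score = abs(np.var(noisy - filtered) - noise_std ** 2)
        if score < best_score:
            best_out, best_score = filtered, score
    return best_out
```

In practice the denoised image, rather than the raw adversarial input, would then be passed to the classifier; the defense needs no retraining of the model itself.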
ISSN: 2079-9292
DOI: 10.3390/electronics9101634