Use Procedural Noise to Achieve Backdoor Attack

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, pp. 127204-127216
Main Authors: Chen, Xuan; Ma, Yuena; Lu, Shiwei
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: In recent years, more researchers have turned their attention to the security of artificial intelligence. The backdoor attack is one such threat, with a powerful, stealthy attack ability. There is a growing trend for triggers to become dynamic and global. In this paper, we propose a novel global backdoor trigger that is generated by procedural noise. Compared with most triggers, ours is stealthier and more straightforward to implement. There are three types of procedural noise, and we evaluate the attack ability of triggers generated by each of them on different classification datasets, including CIFAR-10, GTSRB, CelebA, and ImageNet12. The experimental results show that our attack approach can bypass most defense approaches, and even human inspection. We only need to poison 5%-10% of the training data, yet the attack success rate (ASR) can reach over 99%. To test the robustness of the backdoor model against corruption methods that occur in practice, we introduce 17 corruption methods and compute the accuracy and ASR of the backdoor model under each of them. The results show that our backdoor model is robust to most corruption methods, which means it can be applied in reality. Our code is available at https://github.com/928082786/pnoiseattack .
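
The abstract only outlines the pipeline, so the sketch below illustrates the general idea in Python. It is not the authors' implementation (that lives at the linked GitHub repository): the functions perlin_noise and poison_dataset and the parameters scale, alpha, and poison_rate are illustrative assumptions, and Perlin noise stands in for whichever of the three procedural noise types the paper evaluates. The sketch generates one global noise pattern, blends it faintly into a small fraction of the training images, and relabels those images to the attacker's target class.

    import numpy as np

    def perlin_noise(h, w, scale=8, seed=0):
        # Random unit gradient vectors on a coarse (scale+1, scale+1) lattice.
        rng = np.random.default_rng(seed)
        angles = rng.uniform(0.0, 2.0 * np.pi, size=(scale + 1, scale + 1))
        grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

        # Map every pixel into lattice space and split into cell index + offset.
        ys = np.linspace(0.0, scale, h, endpoint=False)
        xs = np.linspace(0.0, scale, w, endpoint=False)
        yy, xx = np.meshgrid(ys, xs, indexing="ij")
        y0, x0 = yy.astype(int), xx.astype(int)
        fy, fx = yy - y0, xx - x0

        def corner(iy, ix, dy, dx):
            # Dot product of a corner's gradient with the pixel's offset vector.
            g = grads[iy, ix]
            return g[..., 0] * dy + g[..., 1] * dx

        n00 = corner(y0, x0, fy, fx)
        n01 = corner(y0, x0 + 1, fy, fx - 1.0)
        n10 = corner(y0 + 1, x0, fy - 1.0, fx)
        n11 = corner(y0 + 1, x0 + 1, fy - 1.0, fx - 1.0)

        # Quintic fade (6t^5 - 15t^4 + 10t^3) for smooth interpolation.
        u = fx ** 3 * (fx * (6.0 * fx - 15.0) + 10.0)
        v = fy ** 3 * (fy * (6.0 * fy - 15.0) + 10.0)
        noise = (n00 * (1 - u) + n01 * u) * (1 - v) + (n10 * (1 - u) + n11 * u) * v
        return (noise - noise.min()) / (noise.max() - noise.min())  # rescale to [0, 1]

    def poison_dataset(images, labels, target_class, poison_rate=0.1, alpha=0.1, seed=0):
        # images: float array in [0, 1], shape (N, H, W, C); labels: int array, shape (N,).
        rng = np.random.default_rng(seed)
        n, h, w, _ = images.shape
        trigger = perlin_noise(h, w, seed=seed)[..., None]  # one global trigger for all images
        idx = rng.choice(n, size=int(poison_rate * n), replace=False)
        poisoned, new_labels = images.copy(), labels.copy()
        # Blend the zero-centered trigger into the chosen images and relabel them.
        poisoned[idx] = np.clip(poisoned[idx] + alpha * (trigger - 0.5), 0.0, 1.0)
        new_labels[idx] = target_class
        return poisoned, new_labels

Training a classifier on the returned (poisoned, new_labels) pair would implant the backdoor; at test time, adding the same trigger to any input should steer the prediction toward target_class. A small alpha keeps the trigger faint and spread over the whole image, which is what makes this family of global triggers hard to spot by human inspection.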
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3110239