You Can't Fool All the Models: Detect Adversarial Samples via Pruning Models

Many adversarial attack methods have exposed security issues in deep learning models. Previous work on detecting adversarial samples achieves high accuracy but consumes too much memory and computing resources. In this paper, we propose an adversarial sample detection method based on prune...
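The abstract is cut off in this record, but the title conveys the core idea: a perturbation crafted to fool one model often fails to fool pruned variants of it, so disagreement between the original model and its pruned copies can flag an input as adversarial. The following PyTorch sketch illustrates that general idea only; the helper names, pruning ratios, and the any-disagreement rule are illustrative assumptions, not the authors' exact pipeline:

```python
# Sketch of detection via pruned-model disagreement (illustrative, not the
# paper's exact method): prune copies of a trained model at several sparsity
# levels and flag an input when any pruned copy disagrees with the original.
import copy
import torch
import torch.nn.utils.prune as prune

def make_pruned_copy(model, amount):
    """Deep-copy `model` and apply L1-unstructured pruning to every
    Conv2d/Linear weight; `amount` is the fraction of weights zeroed."""
    pruned = copy.deepcopy(model)
    for module in pruned.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return pruned.eval()

@torch.no_grad()
def is_adversarial(x, model, pruned_models):
    """Flag input batch `x` as adversarial if any pruned model's top-1
    prediction differs from the original model's."""
    base_pred = model(x).argmax(dim=1)
    return any(
        not torch.equal(pm(x).argmax(dim=1), base_pred)
        for pm in pruned_models
    )

# Example usage (pruning ratios are assumed, not taken from the paper):
# pruned_models = [make_pruned_copy(model, a) for a in (0.2, 0.4, 0.6)]
# flagged = is_adversarial(x, model.eval(), pruned_models)
```

Because pruning reuses the original weights, this kind of detector needs no adversarial training data, which is consistent with the abstract's claim of lower memory and compute cost than prior detectors.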

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, pp. 163780-163790
Main Authors: Wang, Renxuan; Chen, Zuohui; Dong, Hui; Xuan, Qi
Format: Article
Language: English
Online Access: Full text