A Comparative Study on Adversarial Noise Generation for Single Image Classification


Bibliographic Details
Published in: International journal of intelligent information technologies, 2020-01, Vol. 16 (1), p. 75-87
Main authors: Saxena, Rishabh; Adate, Amit Sanjay; Sasikumar, Don
Format: Article
Language: English
Online access: Full text
Description
Summary: With the rise of neural network-based classifiers, it is evident that these algorithms are here to stay. Even though various defenses have been developed, these classifiers remain vulnerable to misclassification attacks. This article outlines a new noise-layer attack based on adversarial learning and compares the proposed method to other attack methodologies such as the Fast Gradient Sign Method, the Jacobian-based Saliency Map Algorithm, and DeepFool. The work compares these algorithms for the use case of single-image classification and provides a detailed analysis of how they perform relative to one another.
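As background for the comparison in the abstract: the Fast Gradient Sign Method perturbs an input in the direction of the sign of the loss gradient with respect to that input. A minimal sketch on a toy logistic-regression classifier follows; the weights, input, and epsilon are illustrative assumptions, not the article's experimental setup.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression classifier.

    For cross-entropy loss, the gradient of the loss w.r.t. the input x
    is (sigmoid(w.x + b) - y) * w; FGSM adds eps * sign(gradient) to x.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted P(class 1)
    grad_x = (p - y) * w                           # dLoss/dx
    return x + eps * np.sign(grad_x)               # adversarial example

# Toy demo (hypothetical numbers): an input the model assigns to class 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                  # logit w.x + b = 1.5 > 0
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
# x_adv == [0.0, 1.5]; its logit is -1.5, so the perturbation
# flips the predicted class while changing each pixel by at most eps.
```

The same sign-of-gradient step generalizes to deep networks by backpropagating the loss to the input, which is the single-step attack the article benchmarks against.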
ISSN: 1548-3657, 1548-3665
DOI: 10.4018/IJIIT.2020010105