FRNet: A Feature-Rich CNN Architecture to Defend against Adversarial Attacks


Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Access 2024-01, Vol. 12, p. 1-1
Main Authors: Neha, A S, Chaturvedi, Vivek, Shafique, Muhammad
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Adversarial attacks that succeed on natural images also transfer to medical images, paralyzing the diagnostic process and threatening the robustness of the underlying Convolutional Neural Network (CNN) based classifiers. In this work, we first demonstrate the effectiveness of well-known natural-image adversarial attacks such as FGSM and PGD on Malaria cell images. We then propose a novel defense methodology, FRNet, which leverages well-established features such as HOG, LBP, KAZE, and SIFT that detect edges and objects while remaining robust against imperceptible adversarial perturbations. The method uses an MLP to efficiently concatenate these features into FRNet, yielding a convenient, architecture-neutral, and attack-generic methodology. Our experimental results demonstrate that applying FRNet to different CNN architectures, such as a simple CNN, EfficientNet, and MobileNet, reduces the impact of adversarial attacks by as much as 67% compared to the corresponding base models.
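The FGSM attack mentioned in the abstract perturbs an input by a small step in the direction of the sign of the loss gradient. A minimal sketch of that idea, using a toy logistic-regression "classifier" with an analytic gradient (the model, weights, and epsilon are illustrative assumptions; the paper attacks CNNs on Malaria cell images):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """FGSM step: x_adv = x + epsilon * sign(dL/dx) for the BCE loss."""
    p = sigmoid(w @ x + b)       # predicted probability of class 1
    grad_x = (p - y_true) * w    # analytic input gradient of the BCE loss
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy model weights
x = rng.normal(size=8)   # toy "image" input
b = 0.0
epsilon = 0.1

x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=epsilon)
p_clean = sigmoid(w @ x + b)      # confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)    # confidence after the attack
```

The perturbation is bounded per element by epsilon, which is why it stays imperceptible, yet it provably increases the loss of this toy model; PGD iterates the same step with projection.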
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3364818