StAIn: Stealthy Avenues of Attacks on Horizontally Collaborated Convolutional Neural Network Inference and their Mitigation

Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Authors: Adeyemo, Adewale; Sanderson, Jonathan; Odetola, Tolulope; Khalid, Faiq; Hasan, Syed Rafay
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: With significant potential improvements in device-to-device (D2D) communication due to increased wireless link capacity (e.g., 5G and NextG systems), collaboration among multiple edge devices, called horizontal collaboration (HC), is becoming a reality for real-time Edge Intelligence (EI). The distributed nature of HC offers an advantage against traditional adversarial attacks because the adversary does not have access to the entire deep learning architecture (DLA). However, because HC involves multiple untrusted edge devices, the possibility of malicious devices cannot be eliminated. In this paper, we unearth attacks that are highly effective and stealthy even when the attacker has minimal knowledge of the DLA, as is the case in an HC-based DLA, and we provide novel filtering methods to mitigate such attacks. Our attacks leverage local information available in the output feature maps (FMs) of a targeted edge device to modify standard adversarial attacks (e.g., the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA)). Similarly, a customized convolutional neural network (CNN) based filter is empirically designed, developed, and tested. Four CNN models (LeNet, CapsuleNet, MiniVGGNet, and VGG16) are used to validate the proposed attack and defense methodologies. Our three attacks on these four CNN models (with two variations of each attack) cause a substantial accuracy drop of 62% on average, and the proposed filtering approach mitigates the attacks by recovering the accuracy to 75.1% on average. To the best of our knowledge, this is the first work that investigates the security vulnerability of DLAs in the HC environment, and all three of our attacks are scalable and agnostic to the partition location within the DLA.
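
The attack setting described in the abstract can be illustrated with a minimal, hypothetical sketch: a malicious edge device in a horizontally partitioned CNN applies an FGSM-inspired, sign-based perturbation to its own output feature maps, using only locally available information, before forwarding them downstream. The model split, layer sizes, and perturbation rule below are illustrative assumptions for exposition only, not the paper's actual attack or code.

```python
# Illustrative sketch only (not the paper's implementation): a malicious edge
# device in a horizontally partitioned CNN tampers with its output feature
# maps using only local information before passing them to the next device.
import torch
import torch.nn as nn

# Hypothetical split of a LeNet-like model across two edge devices.
device1 = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
device2 = nn.Sequential(nn.Flatten(), nn.Linear(8 * 14 * 14, 10))

def compromised_inference(x, eps=0.1):
    fm = device1(x)                    # output feature maps of the first partition
    # FGSM-inspired step: a sign-based perturbation computed from the local
    # feature maps alone, since the attacker cannot see the rest of the DLA.
    fm_adv = fm + eps * fm.sign()
    return device2(fm_adv)             # downstream device receives tampered FMs

x = torch.randn(4, 1, 28, 28)          # MNIST-sized batch for the LeNet-style split
print(compromised_inference(x).shape)  # torch.Size([4, 10])
```

Because the perturbation is computed from the partition's own feature maps, it remains stealthy from the downstream device's perspective, which motivates the paper's filtering-based defense on the received FMs.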
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3241096