Adversarial Robustness of Distilled and Pruned Deep Learning-based Wireless Classifiers
Format: | Article |
---|---|
Language: | English |
Abstract: | Data-driven deep learning (DL) techniques developed for automatic modulation
classification (AMC) of wireless signals are vulnerable to adversarial attacks.
This poses a severe security threat to DL-based wireless systems, especially
for edge applications of AMC. In this work, we address the joint problem of
developing optimized DL models that are also robust against adversarial
attacks. This enables efficient and reliable deployment of DL-based AMC on edge
devices. We first propose two optimized models using knowledge distillation
and network pruning, followed by a computationally efficient adversarial
training process to improve robustness. Experimental results on five white-box
attacks show that the proposed optimized and adversarially trained models
achieve better robustness than the standard (unoptimized) model. The two
optimized models also achieve higher accuracy on clean (unattacked) samples,
which is essential for the reliability of DL-based solutions in edge
applications. |
---|---|
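The abstract names knowledge distillation and network pruning as the two model-optimization steps. As a rough illustration only, the sketch below shows a standard distillation loss and global magnitude pruning in PyTorch; the function names, temperature, pruning amount, and layer types are assumptions for illustration, not the paper's actual configuration.

```python
# Hypothetical sketch of the two optimization steps named in the abstract:
# knowledge distillation and magnitude pruning. Hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard KD loss: soft-target KL term plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def prune_model(model, amount=0.5):
    """Global unstructured L1-magnitude pruning over all conv/linear weights."""
    params = [
        (m, "weight")
        for m in model.modules()
        if isinstance(m, (nn.Conv1d, nn.Conv2d, nn.Linear))
    ]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)
    # Make the sparsity permanent by removing the pruning re-parameterization.
    for m, name in params:
        prune.remove(m, name)
    return model
```

In such a setup, a smaller student classifier would be trained with distillation_loss against a frozen teacher, while the pruned model would typically be fine-tuned after prune_model to recover accuracy.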
DOI: | 10.48550/arxiv.2404.15344 |
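The abstract also describes a computationally efficient adversarial training process evaluated against five white-box attacks. The paper's exact procedure is not reproduced here; the sketch below assumes a single-step FGSM-based training loop, a common low-cost variant, with illustrative names (fgsm_perturb, adversarial_train_epoch) and an assumed epsilon.

```python
# A minimal sketch of fast (single-step FGSM) adversarial training, one common
# way to keep adversarial training computationally efficient. The paper's exact
# procedure may differ; epsilon, loader, and optimizer names are assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """Generate an L-infinity FGSM perturbation of the input batch x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).detach()

def adversarial_train_epoch(model, loader, optimizer, eps=0.01, device="cpu"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_perturb(model, x, y, eps)
        optimizer.zero_grad()
        # Mix clean and adversarial samples so clean accuracy is preserved,
        # which the abstract highlights as essential for edge deployment.
        loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```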