Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network
Saved in:

Main authors: , ,
Format: Article
Language: English
Subject headings:
Online access: Order full text
Abstract: Deep learning (DL) is becoming popular as a new tool for many applications in wireless communication systems. However, for many classification tasks (e.g., modulation classification) it has been shown that DL-based wireless systems are susceptible to adversarial examples; adversarial examples are well-crafted malicious inputs to the neural network (NN) with the objective of causing erroneous outputs. In this paper, we extend this to regression problems and show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network. Specifically, we extend the fast gradient sign method (FGSM), momentum iterative FGSM, and projected gradient descent adversarial attacks to the context of power allocation in a maMIMO system. We benchmark the performance of these attacks and show that, with a small perturbation in the input of the NN, the white-box attacks can result in infeasible solutions in up to 86% of cases. Furthermore, we investigate the performance of black-box attacks. All the evaluations conducted in this work are based on an open dataset and NN models, which are publicly available.
DOI: 10.48550/arxiv.2101.12090
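For illustration, the sketch below shows the single-step FGSM perturbation applied to a regression network, as named in the abstract. It assumes a hypothetical PyTorch model and a mean-squared-error attack objective; the paper's published models, loss function, and power-allocation features are not reproduced here.

```python
# Minimal FGSM sketch for a regression NN (hypothetical setup, not the
# paper's published code). The attack nudges each input entry in the
# direction that increases the regression loss, within an L-infinity budget.
import torch
import torch.nn as nn


def fgsm_attack(model: nn.Module,
                x: torch.Tensor,
                y: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarially perturbed copy of x.

    x       : input features (e.g., user positions fed to the power-allocation NN)
    y       : clean target output used to form the attack objective
    epsilon : L-infinity perturbation budget
    """
    loss_fn = nn.MSELoss()  # assumed attack objective for this sketch
    x_adv = x.clone().detach().requires_grad_(True)

    loss = loss_fn(model(x_adv), y)
    loss.backward()  # gradient of the loss w.r.t. the input

    with torch.no_grad():
        # One signed gradient step of size epsilon (the FGSM update).
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()
```

Iterating this step with a smaller step size while projecting back into the epsilon-ball yields the projected gradient descent attack, and accumulating a momentum term over the per-step gradients yields momentum iterative FGSM, the other two attacks benchmarked in the paper.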