Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network

Deep learning (DL) is becoming popular as a new tool for many applications in wireless communication systems. However, for many classification tasks (e.g., modulation classification) it has been shown that DL-based wireless systems are susceptible to adversarial examples; adversarial examples are well-crafted malicious inputs to the neural network (NN) with the objective to cause erroneous outputs. In this paper, we extend this to regression problems and show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network. Specifically, we extend the fast gradient sign method (FGSM), momentum iterative FGSM, and projected gradient descent adversarial attacks in the context of power allocation in a maMIMO system. We benchmark the performance of these attacks and show that with a small perturbation in the input of the NN, the white-box attacks can result in infeasible solutions up to 86%. Furthermore, we investigate the performance of black-box attacks. All the evaluations conducted in this work are based on an open dataset and NN models, which are publicly available.
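
To make the attack model concrete, here is a minimal illustrative sketch of a one-step FGSM-style attack on a regression network, in the spirit of the white-box attacks benchmarked in the paper. Everything in it is an assumption for illustration, not the authors' exact setup: the PyTorch framework, the toy PowerAllocNet model, the surrogate loss (total predicted power, to push the allocation toward infeasibility), and the epsilon value. The paper itself uses the publicly available dataset and NN models it cites.

# Hedged sketch: one-step FGSM-style attack on a regression NN.
# All names (PowerAllocNet, the loss choice, eps) are illustrative assumptions.
import torch
import torch.nn as nn


class PowerAllocNet(nn.Module):
    """Toy stand-in for a DL power-allocation model: maps K user positions
    (2 coordinates each) to K downlink transmit powers."""

    def __init__(self, num_users: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_users, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_users),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def fgsm_attack(model: nn.Module, x: torch.Tensor, eps: float) -> torch.Tensor:
    """One FGSM step for a regression model.

    The adversarial objective here is to increase the total allocated power
    (pushing the predicted solution toward infeasibility); other losses are possible.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    powers = model(x_adv)          # predicted per-user powers
    loss = powers.sum()            # surrogate loss: total power
    loss.backward()                # gradient of the loss w.r.t. the input
    with torch.no_grad():
        # L_inf-bounded perturbation in the direction of the loss gradient
        x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = PowerAllocNet(num_users=5)
    x_clean = torch.rand(1, 10)    # illustrative normalized user coordinates
    x_adv = fgsm_attack(model, x_clean, eps=0.01)
    print("clean total power:", model(x_clean).sum().item())
    print("adv.  total power:", model(x_adv).sum().item())

The momentum iterative FGSM and PGD attacks mentioned in the abstract can be viewed as repeating this gradient step several times, with momentum accumulation or with projection back onto the epsilon-ball around the clean input, respectively.
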

Bibliographic details
Main authors: Manoj, B. R, Sadeghi, Meysam, Larsson, Erik G
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Manoj, B. R
Sadeghi, Meysam
Larsson, Erik G
description Deep learning (DL) is becoming popular as a new tool for many applications in wireless communication systems. However, for many classification tasks (e.g., modulation classification) it has been shown that DL-based wireless systems are susceptible to adversarial examples; adversarial examples are well-crafted malicious inputs to the neural network (NN) with the objective to cause erroneous outputs. In this paper, we extend this to regression problems and show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network. Specifically, we extend the fast gradient sign method (FGSM), momentum iterative FGSM, and projected gradient descent adversarial attacks in the context of power allocation in a maMIMO system. We benchmark the performance of these attacks and show that with a small perturbation in the input of the NN, the white-box attacks can result in infeasible solutions up to 86%. Furthermore, we investigate the performance of black-box attacks. All the evaluations conducted in this work are based on an open dataset and NN models, which are publicly available.
doi_str_mv 10.48550/arxiv.2101.12090
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2101.12090
language eng
recordid cdi_arxiv_primary_2101_12090
source arXiv.org
subjects Computer Science - Information Theory
Computer Science - Learning
Mathematics - Information Theory
title Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network
url https://arxiv.org/abs/2101.12090