Intelligent reflecting surface aided covert wireless communication exploiting deep reinforcement learning

Bibliographic details
Published in: Wireless networks, 2023-02, Vol. 29 (2), p. 877-889
Main authors: Hu, Langtao; Bi, Songjiao; Liu, Quanjin; Jiang, Yu’e; Chen, Chunsheng
Format: Article
Language: English
Online access: Full text
Description: Wireless communication systems are facing more and more security threats, and protecting user privacy is becoming increasingly important. Covert communication offers a practical approach: its goal is to hide the very existence of a legitimate transmission. Inspired by the great success of deep reinforcement learning (DRL) in handling challenging optimization problems, DRL is used to optimize covert communication performance. To this end, a model-free, off-policy deep deterministic policy gradient (DDPG) algorithm is proposed to maximize the covert rate under a covertness constraint. The transmit beamformer vector of the legitimate transmitter and the phase-shift matrix of the intelligent reflecting surface (IRS) are the outputs of the DRL neural networks. The DRL agent learns from the environment and adjusts the transmit beamformer vector and phase-shift matrix to maximize covert communication performance. Simulation results demonstrate that the proposed DDPG algorithm achieves performance comparable to two benchmark algorithms.
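
As a purely illustrative aside (not taken from the paper itself), the following minimal PyTorch sketch shows one way a DDPG-style actor could parameterize the two actions the abstract describes: a power-constrained transmit beamformer vector and unit-modulus IRS phase shifts. The state dimension, layer sizes, antenna count n_tx, IRS element count n_irs, and power budget p_max are all assumptions made for the example.

import torch
import torch.nn as nn

class CovertActor(nn.Module):
    # Toy DDPG-style actor: maps a real-valued channel-state observation to
    #   (i) a transmit beamformer vector w scaled to the power budget p_max, and
    #   (ii) unit-modulus IRS reflection coefficients exp(j * theta_m).
    # Layer sizes, state encoding and constants are illustrative assumptions.
    def __init__(self, state_dim, n_tx, n_irs, p_max=1.0):
        super().__init__()
        self.n_tx, self.n_irs, self.p_max = n_tx, n_irs, p_max
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            # real + imaginary parts of w, plus one phase angle per IRS element
            nn.Linear(256, 2 * n_tx + n_irs),
        )

    def forward(self, state):
        out = self.net(state)
        w_re, w_im, ang = torch.split(out, [self.n_tx, self.n_tx, self.n_irs], dim=-1)
        w = torch.complex(w_re, w_im)
        # scale the beamformer so that ||w||^2 meets the transmit power budget
        norm = torch.linalg.vector_norm(w, dim=-1, keepdim=True).clamp_min(1e-9)
        w = w / norm * self.p_max ** 0.5
        # squash raw outputs to angles in (-pi, pi), then form exp(j * theta)
        theta = torch.pi * torch.tanh(ang)
        phase_shifts = torch.exp(torch.complex(torch.zeros_like(theta), theta))
        return w, phase_shifts

if __name__ == "__main__":
    actor = CovertActor(state_dim=32, n_tx=4, n_irs=16)
    w, phi = actor(torch.randn(1, 32))   # placeholder channel observation
    print(w.shape, phi.shape)            # (1, 4) complex beamformer, (1, 16) phase shifts

In the paper's DDPG setting these two outputs would form the continuous action fed back to the environment, with the covert rate and the covertness constraint shaping the reward; none of that training machinery is reproduced in this sketch.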
DOI: 10.1007/s11276-022-03037-2
ISSN: 1022-0038
EISSN: 1572-8196
Source: Springer Nature - Complete Springer Journals
Subjects:
Algorithms
Artificial intelligence
Beamforming
Communications Engineering
Communications systems
Computer Communication Networks
Convex analysis
Deep learning
Design
Electrical Engineering
Engineering
IT in Business
Machine learning
Mathematical analysis
Network management systems
Networks
Neural networks
Optimization
Optimization techniques
Original Paper
Privacy
Reconfigurable intelligent surfaces
Transmitters
Wireless communication systems
Wireless communications
Wireless networks
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T13%3A16%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Intelligent%20reflecting%20surface%20aided%20covert%20wireless%20communication%20exploiting%20deep%20reinforcement%20learning&rft.jtitle=Wireless%20networks&rft.au=Hu,%20Langtao&rft.date=2023-02-01&rft.volume=29&rft.issue=2&rft.spage=877&rft.epage=889&rft.pages=877-889&rft.issn=1022-0038&rft.eissn=1572-8196&rft_id=info:doi/10.1007/s11276-022-03037-2&rft_dat=%3Cproquest_cross%3E2768941492%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2768941492&rft_id=info:pmid/&rfr_iscdi=true