Fast Adaptive Jamming Resource Allocation Against Frequency-Hopping Spread Spectrum in Wireless Sensor Networks via Meta-Deep-Reinforcement-Learning

Partial-band noise jamming is an important countermeasure against frequency-hopping spread spectrum technology in wireless sensor networks, and the jamming resource allocation (JRA) problem involved is a high-dimensional combinatorial optimization and also an NP-hard problem. Moreover, the users can dynamically alter their communication status, such as changing the channel spectrum distributions of hopping sets, posing additional difficulties for JRA. This article develops two methods to address the aforementioned challenges. First, a deep reinforcement learning (DRL)-based method is proposed for efficient JRA optimization, with the jamming scheme of each jamming node decided sequentially by the policy neural network, and its parameters are updated through trust region policy optimization (TRPO) within a trust region to ensure stable and fast convergence. Second, we propose a meta-TRPO-based method to improve the generalization capability of the policy network. After the meta-training process, it can update the meta-policy network with just a few fine-tuning steps and quickly obtain a task-specific policy for the fresh task. Extensive simulation results show that the proposed DRL-based method converges faster than other DRL methods. In addition, the proposed meta-TRPO-based method can rapidly adapt to unseen jamming tasks with only a small quantity of training trajectories.
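
The abstract above describes a sequential decision process in which a policy neural network chooses the jamming scheme for each jamming node in turn. The following is a minimal, hypothetical sketch of that idea in PyTorch; the channel count, number of jamming nodes, state encoding, and network size are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of sequential jamming-scheme selection
# by a policy network, one jamming node at a time. All sizes and the state
# encoding below are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 16    # assumed number of frequency channels in the hopping set
N_JAMMERS = 4      # assumed number of jamming nodes
STATE_DIM = N_CHANNELS + N_JAMMERS  # assumed state: spectrum observation + decided-node flags

class JammingPolicy(nn.Module):
    """Maps the current state to a categorical distribution over channels to jam."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.Tanh(),
            nn.Linear(64, N_CHANNELS),
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

def allocate(policy, channel_usage):
    """Decide a jamming channel for each jamming node sequentially."""
    decided = torch.zeros(N_JAMMERS)
    actions, log_probs = [], []
    for j in range(N_JAMMERS):
        state = torch.cat([channel_usage, decided])
        dist = policy(state)
        action = dist.sample()                   # channel chosen for jamming node j
        actions.append(int(action))
        log_probs.append(dist.log_prob(action))  # kept for a policy-gradient update
        decided[j] = 1.0                         # mark this node as already assigned
    return actions, torch.stack(log_probs)

if __name__ == "__main__":
    policy = JammingPolicy()
    usage = torch.rand(N_CHANNELS)               # placeholder spectrum-usage observation
    print("jamming channels per node:", allocate(policy, usage)[0])
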

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE transactions on aerospace and electronic systems 2024-12, Vol.60 (6), p.7676-7693
Main Authors: Rao, Ning, Xu, Hua, Qi, Zisen, Wang, Dan, Zhang, Yue
Format: Article
Language: English
Subjects:
Online Access: Request full text
container_end_page 7693
container_issue 6
container_start_page 7676
container_title IEEE transactions on aerospace and electronic systems
container_volume 60
creator Rao, Ning
Xu, Hua
Qi, Zisen
Wang, Dan
Zhang, Yue
description Partial-band noise jamming is an important countermeasure against frequency-hopping spread spectrum technology in wireless sensor networks, and the jamming resource allocation (JRA) problem involved is a high-dimensional combinatorial optimization and also an NP-hard problem. Moreover, the users can dynamically alter their communication status, such as changing the channel spectrum distributions of hopping sets, posing additional difficulties for JRA. This article develops two methods to address the aforementioned challenges. First, a deep reinforcement learning (DRL)-based method is proposed for efficient JRA optimization, with the jamming scheme of each jamming node decided sequentially by the policy neural network, and its parameters are updated through trust region policy optimization (TRPO) within a trust region to ensure stable and fast convergence. Second, we propose a meta-TRPO-based method to improve the generalization capability of the policy network. After the meta-training process, it can update the meta-policy network with just a few fine-tuning steps and quickly obtain a task-specific policy for the fresh task. Extensive simulation results show that the proposed DRL-based method converges faster than other DRL methods. In addition, the proposed meta-TRPO-based method can rapidly adapt to unseen jamming tasks with only a small quantity of training trajectories.
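
The description above also notes that, after meta-training, the meta-policy network can be adapted to a new task with just a few fine-tuning steps. The following is a hedged, first-order (Reptile-style) meta-learning sketch of that adaptation loop; it is not the paper's meta-TRPO, which further constrains each policy update to a trust region, and the task sampler, loss function, step sizes, and toy model are placeholder assumptions.

# Hedged illustration of few-shot adaptation: a first-order (Reptile-style)
# meta-update, NOT the paper's meta-TRPO. Task sampling, losses, and step
# sizes are placeholder assumptions.
import copy
import torch

def fine_tune(meta_policy, task_loss_fn, steps=3, lr=1e-2):
    """Adapt a copy of the meta-policy to one jamming task with a few gradient steps."""
    adapted = copy.deepcopy(meta_policy)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        loss = task_loss_fn(adapted)   # e.g. negative expected jamming reward for this task
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted

def meta_train(meta_policy, sample_task, meta_iters=100, meta_lr=0.1):
    """Move the meta-policy toward parameters that fine-tune well on sampled tasks."""
    for _ in range(meta_iters):
        task_loss_fn = sample_task()   # draws one task, e.g. a hopping-set layout
        adapted = fine_tune(meta_policy, task_loss_fn)
        with torch.no_grad():
            for p, q in zip(meta_policy.parameters(), adapted.parameters()):
                p.add_(meta_lr * (q - p))  # interpolate toward the adapted weights
    return meta_policy

if __name__ == "__main__":
    # Toy demonstration with a linear stand-in "policy" and a random quadratic task loss.
    model = torch.nn.Linear(4, 2)
    def sample_task():
        target = torch.randn(2)
        return lambda m: ((m(torch.ones(4)) - target) ** 2).mean()
    meta_train(model, sample_task, meta_iters=20)
    print("meta-trained parameter shapes:", [p.shape for p in model.parameters()])
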
doi_str_mv 10.1109/TAES.2024.3418944
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 0018-9251
ispartof IEEE transactions on aerospace and electronic systems, 2024-12, Vol.60 (6), p.7676-7693
issn 0018-9251
1557-9603
language eng
recordid cdi_crossref_primary_10_1109_TAES_2024_3418944
source IEEE Electronic Library (IEL)
subjects Aerospace and electronic systems
Combinatorial analysis
Decision making
Deep learning
Deep reinforcement learning
Frequency hopping
Jamming
jamming against frequency-hopping spread spectrum (FHSS)
meta-learning
Neural networks
Optimization
Resource allocation
Resource management
Spread spectrum
Task analysis
Wireless sensor networks
wireless sensor networks (WSNs)
title Fast Adaptive Jamming Resource Allocation Against Frequency-Hopping Spread Spectrum in Wireless Sensor Networks via Meta-Deep-Reinforcement-Learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T20%3A58%3A29IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Fast%20Adaptive%20Jamming%20Resource%20Allocation%20Against%20Frequency-Hopping%20Spread%20Spectrum%20in%20Wireless%20Sensor%20Networks%20via%20Meta-Deep-Reinforcement-Learning&rft.jtitle=IEEE%20transactions%20on%20aerospace%20and%20electronic%20systems&rft.au=Rao,%20Ning&rft.date=2024-12-01&rft.volume=60&rft.issue=6&rft.spage=7676&rft.epage=7693&rft.pages=7676-7693&rft.issn=0018-9251&rft.eissn=1557-9603&rft.coden=IEARAX&rft_id=info:doi/10.1109/TAES.2024.3418944&rft_dat=%3Cproquest_RIE%3E3141611766%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3141611766&rft_id=info:pmid/&rft_ieee_id=10571360&rfr_iscdi=true