Learning Multipursuit Evasion for Safe Targeted Navigation of Drones
Safe navigation of drones in the presence of adversarial physical attacks from multiple pursuers is a challenging task. This article proposes a novel approach, asynchronous multistage deep reinforcement learning (AMS-DRL), to train adversarial neural networks that can learn from the actions of multiple evolved pursuers and adapt quickly to their behavior, enabling the drone to avoid attacks and reach its target.
Published in: | IEEE transactions on artificial intelligence 2024-12, Vol.5 (12), p.6210-6224 |
---|---|
Main authors: | Xiao, Jiaping ; Feroskhan, Mir |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 6224 |
---|---|
container_issue | 12 |
container_start_page | 6210 |
container_title | IEEE transactions on artificial intelligence |
container_volume | 5 |
creator | Xiao, Jiaping ; Feroskhan, Mir |
description | Safe navigation of drones in the presence of adversarial physical attacks from multiple pursuers is a challenging task. This article proposes a novel approach, asynchronous multistage deep reinforcement learning (AMS-DRL), to train adversarial neural networks that can learn from the actions of multiple evolved pursuers and adapt quickly to their behavior, enabling the drone to avoid attacks and reach its target. Specifically, AMS-DRL evolves adversarial agents in a pursuit-evasion game (PEG) where the pursuers and the evader are asynchronously trained in a bipartite graph way during multiple stages. Our approach guarantees convergence by ensuring Nash equilibrium (NE) among agents from the game-theory analysis. We evaluate our method in extensive simulations and show that it outperforms baselines with higher navigation success rates (SRs). We also analyze how parameters such as the relative maximum speed affect navigation performance. Furthermore, we have conducted physical experiments and validated the effectiveness of the trained policies in real-time flights. An SR heatmap is introduced to elucidate how spatial geometry influences navigation outcomes. |
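The abstract describes alternating, asynchronous training stages in which one side of the pursuit-evasion game is frozen while the other learns, repeated until the players' payoffs settle near a Nash equilibrium. The toy sketch below illustrates that freeze-and-train scheme only; it is not the authors' implementation, and `payoff`, `hill_climb`, and `ams_train` are hypothetical stand-ins (a scalar payoff and local random search substitute for rollouts and DRL updates).

```python
import random

def payoff(evader_speed, pursuer_speed):
    """Hypothetical zero-sum payoff for the evader: more speed helps
    (capped at 1.0), a faster pursuer hurts. Stands in for a rollout."""
    return min(evader_speed, 1.0) - 0.8 * min(pursuer_speed, 1.0)

def hill_climb(objective, x, steps=200, lr=0.05, seed=0):
    """Stand-in for one training stage: local random search against a
    frozen-opponent objective, keeping only strict improvements."""
    rng = random.Random(seed)
    for _ in range(steps):
        cand = x + rng.uniform(-lr, lr)
        if objective(cand) > objective(x):
            x = cand
    return x

def ams_train(stages=6):
    """Alternate stages: even stages train the evader with the pursuer
    frozen; odd stages train the pursuer with the evader frozen."""
    evader, pursuer = 0.1, 0.1
    history = []
    for s in range(stages):
        if s % 2 == 0:
            evader = hill_climb(lambda e: payoff(e, pursuer), evader, seed=s)
        else:  # zero-sum: the pursuer minimises the evader's payoff
            pursuer = hill_climb(lambda p: -payoff(evader, p), pursuer, seed=s)
        history.append(payoff(evader, pursuer))
    return evader, pursuer, history
```

In this toy setting both players drive their speeds toward the cap, after which neither stage can improve unilaterally, so successive payoffs in `history` stop changing, a crude analogue of the approximate-equilibrium stopping criterion the multistage scheme relies on.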
doi_str_mv | 10.1109/TAI.2024.3366871 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2691-4581 |
ispartof | IEEE transactions on artificial intelligence, 2024-12, Vol.5 (12), p.6210-6224 |
issn | 2691-4581 2691-4581 |
language | eng |
recordid | cdi_ieee_primary_10439240 |
source | IEEE Electronic Library (IEL) |
subjects | Collision avoidance Deep reinforcement learning Deep reinforcement learning (DRL) Drones Game theory Games Multi-agent systems multiagent systems Nash equilibrium Navigation pursuit-evasion game (PEG) safe targeted navigation |
title | Learning Multipursuit Evasion for Safe Targeted Navigation of Drones |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T00%3A47%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-crossref_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Multipursuit%20Evasion%20for%20Safe%20Targeted%20Navigation%20of%20Drones&rft.jtitle=IEEE%20transactions%20on%20artificial%20intelligence&rft.au=Xiao,%20Jiaping&rft.date=2024-12&rft.volume=5&rft.issue=12&rft.spage=6210&rft.epage=6224&rft.pages=6210-6224&rft.issn=2691-4581&rft.eissn=2691-4581&rft.coden=ITAICB&rft_id=info:doi/10.1109/TAI.2024.3366871&rft_dat=%3Ccrossref_RIE%3E10_1109_TAI_2024_3366871%3C/crossref_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10439240&rfr_iscdi=true |