Strategic maneuver and disruption with reinforcement learning approaches for multi-agent coordination

Reinforcement learning (RL) approaches can illuminate emergent behaviors that facilitate coordination across teams of agents as part of a multi-agent system (MAS), which can provide windows of opportunity in various military tasks. Technologically advancing adversaries pose substantial risks to a friendly nation's interests and resources. Superior resources alone are not enough to defeat adversaries in modern complex environments because adversaries create standoff in multiple domains against predictable military doctrine-based maneuvers. Therefore, as part of a defense strategy, friendly forces must use strategic maneuvers and disruption to gain superiority in complex multi-faceted domains, such as multi-domain operations (MDOs). One promising avenue for implementing strategic maneuver and disruption to gain superiority over adversaries is through coordination of MAS in future military operations. In this paper, we present overviews of prominent works in the RL domain with their strengths and weaknesses for overcoming the challenges associated with performing autonomous strategic maneuver and disruption in military contexts.
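The abstract's central idea, that RL agents can learn emergent coordination without being explicitly programmed for it, can be illustrated with a minimal sketch. The following toy example is an assumption-laden illustration, not the surveyed paper's method: two independent tabular Q-learners play a single-state coordination game and are rewarded only when their actions match.

```python
import random

# Toy two-agent coordination game: each agent picks an action in {0, 1};
# both receive reward 1 only when their choices match. Each agent learns
# independently with single-state tabular Q-learning. This is a generic
# illustration of emergent MAS coordination, not the paper's methodology.

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[agent][action]
    for _ in range(episodes):
        acts = []
        for agent in range(2):
            if rng.random() < epsilon:          # epsilon-greedy exploration
                acts.append(rng.randrange(2))
            else:                               # greedy pick, ties favor 0
                acts.append(0 if q[agent][0] >= q[agent][1] else 1)
        reward = 1.0 if acts[0] == acts[1] else 0.0
        for agent in range(2):
            a = acts[agent]
            q[agent][a] += alpha * (reward - q[agent][a])
    return q

q = train()
# After training, both agents should have settled on the same action.
best = [0 if qa[0] >= qa[1] else 1 for qa in q]
print(best)
```

Neither agent observes the other's Q-table; the shared reward alone drives them to a common convention, which is the emergent-coordination effect the abstract refers to.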

Detailed description

Saved in:
Bibliographic details
Published in: Journal of defense modeling and simulation, 2023-10, Vol. 20 (4), p. 509-526
Main authors: Asher, Derrik E; Basak, Anjon; Fernandez, Rolando; Sharma, Piyush K; Zaroukian, Erin G; Hsu, Christopher D; Dorothy, Michael R; Mahre, Thomas; Galindo, Gerardo; Frerichs, Luke; Rogers, John; Fossaceca, John
Format: Article
Language: English
Online access: Full text
doi_str_mv 10.1177/15485129221104096
format Article
publisher London, England: SAGE Publications
rights The Author(s) 2022
orcid 0000-0001-8427-038X; 0000-0002-1217-8581
identifier ISSN: 1548-5129
ispartof Journal of defense modeling and simulation, 2023-10, Vol.20 (4), p.509-526
issn 1548-5129
eissn 1557-380X
language eng
recordid cdi_crossref_primary_10_1177_15485129221104096
source SAGE Complete
title Strategic maneuver and disruption with reinforcement learning approaches for multi-agent coordination