Hierarchical Perception-Improving for Decentralized Multi-Robot Motion Planning in Complex Scenarios

Published in: IEEE Transactions on Intelligent Transportation Systems, 2024-07, Vol. 25 (7), pp. 6486-6500
Authors: Jia, Yunjie; Song, Yong; Xiong, Bo; Cheng, Jiyu; Zhang, Wei; Yang, Simon X.; Kwong, Sam
Format: Article
Language: English
DOI: 10.1109/TITS.2023.3344518
ISSN: 1524-9050
EISSN: 1558-0016

Abstract

Multi-robot cooperative navigation is an important task that has been widely studied in fields such as logistics, transportation, and disaster rescue. However, most existing methods either rely on strong assumptions or are validated only in simple scenarios, which greatly hinders their deployment in the real world. In this paper, more complex environments are considered, in which robots can acquire only local observations from their own sensors and have limited communication capabilities for mapless collaborative navigation. To address this challenging task, we propose a hierarchical framework that fuses both Sensor-wise and Agent-wise features for Perception-Improving (SAPI), adaptively integrating features from different information sources to improve perception capabilities. Specifically, to facilitate scene understanding, we assign prior knowledge to the visual coder to generate efficient embeddings. For effective feature representation, an attention-based sensor fusion network fuses sensor-level information from the visual and LiDAR sensors, while graph convolution with a multi-head attention mechanism aggregates agent-level information from an arbitrary number of neighbors. In addition, reinforcement learning is used to optimize the policy, with a novel compound reward function introduced to guide training. Extensive experiments demonstrate that our method has excellent generalization ability across different scenarios and scales well to large systems.
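
The record does not reproduce the network details, but the sensor-level fusion idea the abstract describes can be illustrated. Below is a minimal PyTorch sketch of attention-based fusion of per-robot visual and LiDAR feature vectors; all dimensions, names (SensorFusion, visual_proj, lidar_proj), and the two-token fusion layout are assumptions for illustration, not the paper's actual SAPI architecture.

```python
# Hedged sketch: attention-based fusion of visual and LiDAR features.
# Dimensions and module layout are illustrative assumptions only.
import torch
import torch.nn as nn

class SensorFusion(nn.Module):
    def __init__(self, visual_dim=256, lidar_dim=128, fused_dim=128):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        self.lidar_proj = nn.Linear(lidar_dim, fused_dim)
        # Attention over the two sensor tokens; the weights decide how
        # much each modality contributes to the fused perception feature.
        self.attn = nn.MultiheadAttention(fused_dim, num_heads=4,
                                          batch_first=True)

    def forward(self, visual_feat, lidar_feat):
        # visual_feat: (B, visual_dim), lidar_feat: (B, lidar_dim)
        tokens = torch.stack(
            [self.visual_proj(visual_feat), self.lidar_proj(lidar_feat)],
            dim=1)                          # (B, 2, fused_dim)
        fused, weights = self.attn(tokens, tokens, tokens)
        return fused.mean(dim=1), weights   # (B, fused_dim) pooled feature

fusion = SensorFusion()
feat, w = fusion(torch.randn(4, 256), torch.randn(4, 128))
print(feat.shape)  # torch.Size([4, 128])
```

Letting attention weight the modalities, rather than concatenating them, is one way to realize the "adaptive integration" the abstract claims, since the weights can shift toward LiDAR when the camera view is uninformative.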
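
The agent-level aggregation over "an arbitrary number of neighbors" can likewise be sketched with multi-head attention over a zero-padded, masked neighbor set. The padding scheme and dimensions below are illustrative assumptions, not the paper's graph-convolution design.

```python
# Hedged sketch: aggregating a variable number of neighbor features
# with multi-head attention, using padding + a key mask.
import torch
import torch.nn as nn

class NeighborAggregator(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ego, neighbors, neighbor_mask):
        # ego: (B, dim) ego-robot feature, used as the attention query.
        # neighbors: (B, N_max, dim) zero-padded neighbor features.
        # neighbor_mask: (B, N_max) bool, True where a slot is padding.
        query = ego.unsqueeze(1)                       # (B, 1, dim)
        agg, _ = self.attn(query, neighbors, neighbors,
                           key_padding_mask=neighbor_mask)
        return agg.squeeze(1)                          # (B, dim)

agg = NeighborAggregator()
ego = torch.randn(2, 128)
nbrs = torch.randn(2, 5, 128)
mask = torch.tensor([[False, False, True, True, True],
                     [False, False, False, False, True]])
print(agg(ego, nbrs, mask).shape)  # torch.Size([2, 128])
```

Because the mask hides padded slots, the same module handles two neighbors or twenty, which is what makes attention-style aggregation attractive for the scalability the abstract reports.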
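
Finally, the abstract mentions a novel compound reward function but does not spell it out in this record. The sketch below shows only the generic pattern such rewards usually follow (dense progress shaping plus sparse terminal terms); every term and coefficient here is an assumption, not the paper's actual reward.

```python
# Hedged sketch of a compound navigation reward. The specific
# components and weights are common choices, assumed for illustration.
def compound_reward(dist_to_goal_prev, dist_to_goal, collided, reached_goal,
                    w_progress=2.5, r_goal=15.0, r_collision=-15.0,
                    r_step=-0.05):
    if reached_goal:
        return r_goal                       # sparse success bonus
    if collided:
        return r_collision                  # terminal collision penalty
    progress = dist_to_goal_prev - dist_to_goal
    return w_progress * progress + r_step   # dense shaping + time penalty
```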

Subjects: Collision avoidance; Deep reinforcement learning; Effectiveness; Feature fusion; Information sources; Motion planning; Multi-robot systems; Multiple robots; Multisensor fusion; Navigation; Perception; Reagents; Robot dynamics; Robot kinematics; Robot sensing systems; Scene analysis; Sensors; Visualization