Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning

Detailed Description

Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main authors: Kamezaki, Mitsuhiro; Ong, Ryan; Sugano, Shigeki
Format: Article
Language: English
Subjects: (see list below)
Online access: Full text
description To avoid inefficient movement or the freezing problem in crowded environments, we previously proposed a human-aware interactive navigation method that uses inducement, i.e., voice reminders or physical touch. However, the use of inducement largely depends on many factors, including human attributes, task contents, and environmental contexts. Thus, it is unrealistic to pre-design a set of parameters such as the coefficients in the cost function, personal space, and velocity in accordance with the situation. To understand and evaluate if inducement (voice reminder in this study) is effective and how and when it must be used, we propose to comprehend them through multiagent deep reinforcement learning in which the robot voluntarily acquires an inducing policy suitable for the situation. Specifically, we evaluate whether a voice reminder can improve the time to reach the goal by learning when the robot uses it. Results of simulation experiments with four different situations show that the robot could learn inducing policies suited for each situation, and the effectiveness of inducement is greatly improved in more congested and narrow situations.
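The idea sketched in the abstract — a robot that learns, per situation, whether issuing a voice reminder is worth its cost — can be illustrated with a deliberately simplified toy model. This is not the authors' multiagent simulator or network architecture; the congestion states, time model, and tabular one-step update below are all assumptions made purely for illustration:

```python
import random

# Hypothetical toy setup: a robot repeatedly crosses a corridor whose
# congestion level is the state; its only decision is whether to issue a
# voice reminder. The reminder carries a fixed time overhead but makes
# traversal time grow slowly with crowding, so it should pay off only in
# congested states -- mirroring the qualitative claim in the abstract.
CONGESTION_LEVELS = 4        # 0 = empty ... 3 = very crowded (assumed scale)
SILENT, REMIND = 0, 1
ACTIONS = (SILENT, REMIND)

def traversal_time(congestion, action):
    """Assumed time model: silence is cheap when empty, costly when crowded."""
    if action == SILENT:
        return 1.0 + 2.0 * congestion
    return 3.0 + 0.5 * congestion  # reminder: overhead, but shallow growth

def train(episodes=5000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(CONGESTION_LEVELS)]
    for _ in range(episodes):
        s = rng.randrange(CONGESTION_LEVELS)          # random situation
        if rng.random() < epsilon:                    # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[s][act])
        r = -traversal_time(s, a)                     # reward = negative time to goal
        q[s][a] += alpha * (r - q[s][a])              # one-step value update
    return q

q = train()
# Greedy "inducing policy": when to speak, as a function of congestion.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(CONGESTION_LEVELS)]
```

Under this assumed reward model the learned policy stays silent in empty or lightly crowded corridors and issues reminders only as congestion rises, which matches the abstract's finding that inducement becomes markedly more effective in congested, narrow situations.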
doi_str_mv 10.1109/ACCESS.2023.3253513
format Article
identifier ISSN: 2169-3536
ispartof IEEE access, 2023-01, Vol.11, p.1-1
issn 2169-3536
eissn 2169-3536
language eng
recordid cdi_ieee_primary_10061379
source IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals
subjects Autonomous mobile robot
Autonomous robots
Collaborative robot navigation
Cost function
Deep learning
Design parameters
Freezing
Inducing policy acquisition
Mobile robots
Multi-agent systems
Multiagent deep reinforcement learning
Multiagent systems
Navigation
Reinforcement learning
Robot motion
Robots
Voice
title Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning