MIRACLE: Inverse Reinforcement and Curriculum Learning Model for Human-inspired Mobile Robot Navigation

In emergency scenarios, mobile robots must navigate like humans, interpreting stimuli to locate potential victims rapidly without interfering with first responders. Existing socially-aware navigation algorithms face computational and adaptability challenges. To overcome these, we propose MIRACLE, an inverse reinforcement and curriculum learning model that employs gamified learning to gather stimuli-driven human navigational data. This data is then used to train a Deep Inverse Maximum Entropy Reinforcement Learning model, reducing reliance on demonstrator abilities. Testing reveals a low loss of 2.7717 within a 400-sized environment, signifying human-like response replication. Current databases lack comprehensive stimuli-driven data, necessitating our approach. By doing so, we enable robots to navigate emergency situations with human-like perception, enhancing their life-saving capabilities.
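
The core training machinery named in the abstract is deep maximum-entropy inverse reinforcement learning: a reward model is fitted so that the soft-optimal policy it induces reproduces the demonstrators' state-visitation statistics. As a rough illustration only (this is not the MIRACLE code; the 5x5 gridworld, one-hot features, tiny reward network, hand-coded "expert" paths, and all hyperparameters below are assumptions), a minimal MaxEnt deep IRL loop in Python might look like this:

```python
"""Minimal sketch of deep maximum-entropy IRL on a toy gridworld.

This is NOT the MIRACLE implementation from the paper: the 5x5 grid,
one-hot features, tiny reward network, hand-coded "expert" paths, and
all hyperparameters are illustrative assumptions."""
import numpy as np
import torch
import torch.nn as nn

N = 5                 # grid side length (assumed)
S, A = N * N, 4       # number of states and actions (up, down, left, right)
GOAL = S - 1          # bottom-right cell stands in for a "victim" cue


def step(s, a):
    """Deterministic grid transition with wall clamping."""
    r, c = divmod(s, N)
    if a == 0: r = max(r - 1, 0)
    elif a == 1: r = min(r + 1, N - 1)
    elif a == 2: c = max(c - 1, 0)
    elif a == 3: c = min(c + 1, N - 1)
    return r * N + c


def expert_trajectory(start):
    """Hand-coded stand-in for one human demonstration: go down, then right."""
    traj, s = [start], start
    while s != GOAL:
        r, _ = divmod(s, N)
        s = step(s, 1 if r < N - 1 else 3)
        traj.append(s)
    return traj


demos = [expert_trajectory(s0) for s0 in (0, 2, 10)]
T = max(len(d) for d in demos)
for demo in demos:                        # pad: treat the goal as absorbing
    demo += [GOAL] * (T - len(demo))

# Expert state-visitation frequencies (SVF), averaged over demonstrations.
expert_svf = np.zeros(S, dtype=np.float32)
for demo in demos:
    for s in demo:
        expert_svf[s] += 1.0 / len(demos)

# Reward network over one-hot state features.
phi = torch.eye(S)
reward_net = nn.Sequential(nn.Linear(S, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=0.05)
next_state = np.array([[step(s, a) for a in range(A)] for s in range(S)])

for it in range(200):
    r = reward_net(phi).squeeze(-1)       # current reward estimate per state
    r_np = r.detach().numpy()

    # Soft (MaxEnt) value iteration -> stochastic policy under this reward.
    V = np.zeros(S, dtype=np.float32)
    for _ in range(50):
        Q = r_np[next_state] + V[next_state]              # shape (S, A)
        m = Q.max(axis=1, keepdims=True)
        V = (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True))).squeeze(1)
    policy = np.exp(Q - V[:, None])
    policy /= policy.sum(axis=1, keepdims=True)

    # Expected SVF of the learner, rolled out from the demo start states.
    dist = np.zeros(S, dtype=np.float32)
    for demo in demos:
        dist[demo[0]] += 1.0 / len(demos)
    learner_svf = dist.copy()
    for _ in range(T - 1):
        nxt = np.zeros(S, dtype=np.float32)
        for s in range(S):
            for a in range(A):
                nxt[next_state[s, a]] += dist[s] * policy[s, a]
        dist = nxt
        learner_svf += dist

    # MaxEnt IRL gradient: raise reward where the expert visits more than we do.
    weights = torch.from_numpy(learner_svf - expert_svf)
    opt.zero_grad()
    (r * weights).sum().backward()
    opt.step()

print("learned reward at goal vs. start state:",
      reward_net(phi[GOAL]).item(), reward_net(phi[0]).item())
```

The update applied to the reward network is the standard MaxEnt IRL gradient, the difference between the learner's and the expert's state-visitation frequencies. In the paper's setting, the hand-coded paths would be replaced by the gamified, stimuli-driven human demonstrations, and the curriculum-learning component would presumably order those demonstrations from simpler to harder scenarios.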

Bibliographic Details

Published in: arXiv.org, 2023-12
Main authors: Gunukula, Nihal; Tiwari, Kshitij; Bera, Aniket
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Online access: Full text
Subjects: Algorithms; Curricula; Emergency response; Maximum entropy; Navigation; Robots; Stimuli
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T01%3A14%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=MIRACLE:%20Inverse%20Reinforcement%20and%20Curriculum%20Learning%20Model%20for%20Human-inspired%20Mobile%20Robot%20Navigation&rft.jtitle=arXiv.org&rft.au=Gunukula,%20Nihal&rft.date=2023-12-07&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2899513252%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2899513252&rft_id=info:pmid/&rfr_iscdi=true