Model-free Neural Lyapunov Control for Safe Robot Navigation

Model-free Deep Reinforcement Learning (DRL) controllers have demonstrated promising results on various challenging non-linear control tasks. While a model-free DRL algorithm can handle unknown dynamics and high-dimensional problems, it lacks safety assurances. Although safety constraints can be encoded as part of a reward function, there still exists a large gap between an RL controller trained with this modified reward and a safe controller. In contrast, instead of implicitly encoding safety constraints with rewards, we explicitly co-learn a Twin Neural Lyapunov Function (TNLF) with the control policy in the DRL training loop and use the learned TNLF to build a runtime monitor. Combined with a path generated by a planner, the monitor chooses appropriate waypoints that guide the learned controller to provide collision-free control trajectories. Our approach inherits the scalability advantages of DRL while enhancing safety guarantees. Our experimental evaluation demonstrates the effectiveness of our approach compared to DRL with augmented rewards and constrained DRL methods over a range of high-dimensional safety-sensitive navigation tasks.
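The abstract does not spell out how the TNLF is co-learned, so the following is a minimal sketch of one plausible training loss. It assumes, in the spirit of neural Lyapunov methods and twin-critic DRL, two Lyapunov networks whose pointwise maximum gives a conservative value, trained so that this value decreases along transitions collected by the current policy. All names here (`LyapunovNet`, `tnlf_loss`, `margin`) are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn as nn


class LyapunovNet(nn.Module):
    """Lyapunov candidate V(s) >= 0; squaring the output keeps it non-negative."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s).pow(2)


def tnlf_loss(V1: LyapunovNet, V2: LyapunovNet,
              s: torch.Tensor, s_next: torch.Tensor,
              margin: float = 0.01) -> torch.Tensor:
    # Conservative value: pointwise max of the twin networks.
    v = torch.max(V1(s), V2(s))
    v_next = torch.max(V1(s_next), V2(s_next))
    # Penalize transitions where V fails to decrease by at least `margin`.
    # In a co-learning loop this term would be minimized jointly with the
    # policy's DRL objective on batches of (s, s') pairs.
    return torch.relu(v_next - v + margin).mean()
```

The enforced decrease condition V(s') < V(s) along policy rollouts is what makes the learned certificate usable by the runtime monitor described next.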

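Likewise, a hedged sketch of the runtime monitor: given a planner path, commit to the farthest waypoint whose goal-conditioned TNLF value at the current state falls below a safety threshold, so the learned controller is only asked to track waypoints the certificate judges it can approach without collision. The goal-conditioned signature `V(state, waypoint)` and the sublevel-set test are assumptions for illustration, not the paper's exact monitor condition.

```python
from typing import Callable, Sequence

import numpy as np


def select_waypoint(path: Sequence[np.ndarray],
                    state: np.ndarray,
                    V: Callable[[np.ndarray, np.ndarray], float],
                    threshold: float) -> np.ndarray:
    """Scan the planner path from the farthest waypoint backwards and return
    the first one the TNLF certifies as reachable from `state` without
    leaving the safe sublevel set {V <= threshold}."""
    for waypoint in reversed(path):
        if V(state, waypoint) <= threshold:
            return waypoint
    # No waypoint certified: fall back to the nearest one (replanning is
    # a natural next step, but out of scope for this sketch).
    return path[0]
```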
Bibliographic Details
Main Authors: Xiong, Zikang; Eappen, Joe; Qureshi, Ahmed H; Jagannathan, Suresh
Format: Article
Language: eng
Subjects: Computer Science - Learning; Computer Science - Robotics
DOI: 10.48550/arxiv.2203.01190
Date: 2022-03-02
Online Access: https://arxiv.org/abs/2203.01190