Recovery RL: Safe Reinforcement Learning With Learned Recovery Zones

Safety remains a central obstacle preventing widespread use of RL in the real world: learning new tasks in uncertain environments requires extensive exploration, but safety requires limiting exploration. We propose Recovery RL, an algorithm which navigates this tradeoff by (1) leveraging offline data to learn about constraint violating zones before policy learning and (2) separating the goals of improving task performance and constraint satisfaction across two policies: a task policy that only optimizes the task reward and a recovery policy that guides the agent to safety when constraint violation is likely. We evaluate Recovery RL on 6 simulation domains, including two contact-rich manipulation tasks and an image-based navigation task, and an image-based obstacle avoidance task on a physical robot. We compare Recovery RL to 5 prior safe RL methods which jointly optimize for task performance and safety via constrained optimization or reward shaping and find that Recovery RL outperforms the next best prior method across all domains. Results suggest that Recovery RL trades off constraint violations and task successes 2-20 times more efficiently in simulation domains and 3 times more efficiently in physical experiments. See https://tinyurl.com/rl-recovery for videos and supplementary material.
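
The abstract describes the core mechanism of Recovery RL: a safety critic, learned from offline data about constraint-violating zones, decides at each step whether to execute the task policy's proposed action or to hand control to the recovery policy. The short Python sketch below illustrates only that switching rule; the names (q_risk, pi_task, pi_rec, eps_risk) and the placeholder implementations are assumptions for illustration and do not reproduce the authors' code.

import numpy as np

def q_risk(state, action):
    # Placeholder safety critic (assumed name): estimated probability that
    # taking `action` in `state` leads to a constraint violation. In the
    # paper this critic is pretrained on offline constraint-violation data.
    return float(np.clip(np.linalg.norm(state + action) / 10.0, 0.0, 1.0))

def pi_task(state):
    # Placeholder task policy: optimizes the task reward only.
    return -0.1 * state

def pi_rec(state):
    # Placeholder recovery policy: steers the agent back toward safety.
    return -0.5 * state

def select_action(state, eps_risk=0.3):
    # Switching rule: propose the task action, but defer to the recovery
    # policy whenever the critic flags the proposal as too risky.
    proposed = pi_task(state)
    if q_risk(state, proposed) > eps_risk:
        return pi_rec(state)
    return proposed

print(select_action(np.array([2.0, -1.5])))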

Bibliographic Details
Published in: IEEE robotics and automation letters, 2021-07, Vol. 6 (3), pp. 4915-4922
Main Authors: Thananjeyan, Brijen; Balakrishna, Ashwin; Nair, Suraj; Luo, Michael; Srinivasan, Krishnan; Hwang, Minho; Gonzalez, Joseph E.; Ibarz, Julian; Finn, Chelsea; Goldberg, Ken
Format: Article
Language: English
Subjects: Algorithms; Constraints; Domains; Image manipulation; Learning; Navigation; Obstacle avoidance; Optimization; Recovery; Recovery zones; Reinforcement learning; Safety
ISSN: 2377-3766
DOI: 10.1109/LRA.2021.3070252
Online Access: https://ieeexplore.ieee.org/document/9392290