CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning

Safe reinforcement learning (RL) remains challenging because it requires the agent to balance return maximization with safe exploration. In this paper, we propose CUP, a Conservative Update Policy algorithm with a theoretical safety guarantee. We derive CUP from newly proposed performance bounds and surrogate functions. Although using bounds as surrogate functions to design safe RL algorithms has appeared in existing work, we develop them in at least three respects: (i) we provide a rigorous theoretical analysis that extends the surrogate functions to the generalized advantage estimator (GAE), which empirically reduces variance significantly while maintaining a tolerable level of bias, an important step in the design of CUP; (ii) the proposed bounds are tighter than those in existing work, i.e., using them as surrogate functions yields better local approximations of the objective and the safety constraints; (iii) CUP admits a non-convex implementation via first-order optimizers that does not rely on any convex approximation. Finally, extensive experiments demonstrate the effectiveness of CUP, with the agent satisfying the safety constraints. The source code of CUP is available at https://github.com/RL-boxes/Safe-RL.
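As background for the abstract above (this is the standard constrained-MDP formulation commonly used in safe RL, not notation taken from the paper itself), the trade-off between return maximization and safe exploration is usually posed as

\max_{\pi} \; J(\pi) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\Big] \quad \text{s.t.} \quad J_C(\pi) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} c(s_t, a_t)\Big] \le d,

where r is the reward, c is a cost signal encoding the safety requirement, and d is the allowed cost budget.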

Detailed description

Bibliographic details
Main authors: Yang, Long; Ji, Jiaming; Dai, Juntao; Zhang, Yu; Li, Pengfei; Pan, Gang
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
Online access: Order full text
creator Yang, Long ; Ji, Jiaming ; Dai, Juntao ; Zhang, Yu ; Li, Pengfei ; Pan, Gang
description Safe reinforcement learning (RL) remains challenging because it requires the agent to balance return maximization with safe exploration. In this paper, we propose CUP, a Conservative Update Policy algorithm with a theoretical safety guarantee. We derive CUP from newly proposed performance bounds and surrogate functions. Although using bounds as surrogate functions to design safe RL algorithms has appeared in existing work, we develop them in at least three respects: (i) we provide a rigorous theoretical analysis that extends the surrogate functions to the generalized advantage estimator (GAE), which empirically reduces variance significantly while maintaining a tolerable level of bias, an important step in the design of CUP; (ii) the proposed bounds are tighter than those in existing work, i.e., using them as surrogate functions yields better local approximations of the objective and the safety constraints; (iii) CUP admits a non-convex implementation via first-order optimizers that does not rely on any convex approximation. Finally, extensive experiments demonstrate the effectiveness of CUP, with the agent satisfying the safety constraints. The source code of CUP is available at https://github.com/RL-boxes/Safe-RL.
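The description above relies on the generalized advantage estimator (GAE) to trade off bias and variance. For reference, the following is a minimal NumPy sketch of the standard GAE recursion; it is an illustration only, not code from the paper or the linked repository, and the function name, default hyperparameters, and example trajectory are hypothetical.

```python
import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Standard GAE recursion for a single, non-terminating trajectory segment.

    rewards:    shape (T,), per-step rewards (or per-step costs for a cost critic).
    values:     shape (T,), critic estimates V(s_t).
    last_value: bootstrap estimate V(s_T) for the state after the final step.
    gamma:      discount factor.
    lam:        GAE coefficient (lower lam = more bias, less variance).
    """
    values_ext = np.append(values, last_value)
    advantages = np.zeros(len(rewards), dtype=np.float64)
    gae = 0.0
    # Backward pass: A_t = delta_t + gamma * lam * A_{t+1},
    # with delta_t = r_t + gamma * V(s_{t+1}) - V(s_t).
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values_ext[t + 1] - values_ext[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

# Hypothetical 5-step trajectory, for illustration only.
rewards = np.array([1.0, 0.0, 0.5, 0.0, 1.0])
values = np.array([0.8, 0.7, 0.9, 0.4, 0.6])
print(gae_advantages(rewards, values, last_value=0.0))
```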
doi_str_mv 10.48550/arxiv.2202.07565
format Article
identifier DOI: 10.48550/arxiv.2202.07565
language eng
recordid cdi_arxiv_primary_2202_07565
source arXiv.org
subjects Computer Science - Artificial Intelligence ; Computer Science - Learning
title CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T09%3A53%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=CUP:%20A%20Conservative%20Update%20Policy%20Algorithm%20for%20Safe%20Reinforcement%20Learning&rft.au=Yang,%20Long&rft.date=2022-02-15&rft_id=info:doi/10.48550/arxiv.2202.07565&rft_dat=%3Carxiv_GOX%3E2202_07565%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true