Constraint-Conditioned Policy Optimization for Versatile Safe Reinforcement Learning

Safe reinforcement learning (RL) focuses on training reward-maximizing agents subject to pre-defined safety constraints. Yet, learning versatile safe policies that can adapt to varying safety constraint requirements during deployment without retraining remains a largely unexplored and challenging area. In this work, we formulate the versatile safe RL problem and consider two primary requirements: training efficiency and zero-shot adaptation capability. To address them, we introduce the Conditioned Constrained Policy Optimization (CCPO) framework, consisting of two key modules: (1) Versatile Value Estimation (VVE) for approximating value functions under unseen threshold conditions, and (2) Conditioned Variational Inference (CVI) for encoding arbitrary constraint thresholds during policy optimization. Our extensive experiments demonstrate that CCPO outperforms the baselines in terms of safety and task performance while preserving data-efficient zero-shot adaptation to different constraint thresholds. This makes our approach suitable for real-world dynamic applications.

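For context, the following is a minimal sketch of the versatile safe RL objective the abstract alludes to, written in generic constrained-MDP notation assumed here for illustration (the policy, reward, cost, discount, and threshold symbols are not taken from the paper): a single threshold-conditioned policy is asked to satisfy the cost constraint for every threshold it may be deployed with.

\[
  \max_{\theta} \;
  \mathbb{E}_{\epsilon \sim \mathcal{E}} \,
  \mathbb{E}_{\tau \sim \pi_\theta(\cdot \mid \epsilon)}
  \Big[ \textstyle\sum_{t \ge 0} \gamma^{t} \, r(s_t, a_t) \Big]
  \quad \text{s.t.} \quad
  \mathbb{E}_{\tau \sim \pi_\theta(\cdot \mid \epsilon)}
  \Big[ \textstyle\sum_{t \ge 0} \gamma^{t} \, c(s_t, a_t) \Big] \le \epsilon
  \quad \forall\, \epsilon \in \mathcal{E}
\]

Under this reading, zero-shot adaptation amounts to evaluating the same trained policy conditioned on thresholds not seen during training, without any further parameter updates.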

Bibliographic Details
Main Authors: Yao, Yihang; Liu, Zuxin; Cen, Zhepeng; Zhu, Jiacheng; Yu, Wenhao; Zhang, Tingnan; Zhao, Ding
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
Online Access: Full text available at https://arxiv.org/abs/2310.03718
DOI: 10.48550/arxiv.2310.03718
Published: 2023-10-05
Source: arXiv.org