Constraint-Conditioned Policy Optimization for Versatile Safe Reinforcement Learning
Saved in:
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Safe reinforcement learning (RL) focuses on training reward-maximizing agents
subject to pre-defined safety constraints. Yet, learning versatile safe
policies that can adapt to varying safety constraint requirements during
deployment without retraining remains a largely unexplored and challenging
area. In this work, we formulate the versatile safe RL problem and consider two
primary requirements: training efficiency and zero-shot adaptation capability.
To address them, we introduce the Constraint-Conditioned Policy Optimization
(CCPO) framework, consisting of two key modules: (1) Versatile Value Estimation
(VVE) for approximating value functions under unseen threshold conditions, and
(2) Conditioned Variational Inference (CVI) for encoding arbitrary constraint
thresholds during policy optimization. Our extensive experiments demonstrate
that CCPO outperforms the baselines in terms of safety and task performance,
while preserving data-efficient zero-shot adaptation to different constraint
thresholds. This makes our approach suitable for real-world dynamic
applications.
DOI: 10.48550/arxiv.2310.03718
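The abstract only summarizes the method, but the core idea of conditioning a policy on a safety-constraint threshold can be sketched in code. The following is a minimal, hypothetical PyTorch sketch, not the authors' CCPO/VVE/CVI implementation: it only illustrates a policy network that takes the cost threshold as an extra input, so that one trained policy can be queried under different cost limits without retraining. The class name, network sizes, and the Gaussian policy head are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ThresholdConditionedPolicy(nn.Module):
    """Gaussian policy whose input includes the cost-constraint threshold (sketch)."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        # Appending the threshold to the observation lets a single network
        # act under arbitrary cost limits chosen at deployment time.
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean_head = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs: torch.Tensor, threshold: torch.Tensor):
        # Concatenate state and threshold, then output an action distribution.
        x = torch.cat([obs, threshold], dim=-1)
        h = self.backbone(x)
        return torch.distributions.Normal(self.mean_head(h), self.log_std.exp())


# The same policy can be evaluated under different (hypothetical) cost limits
# without retraining, which is the zero-shot adaptation setting described above.
policy = ThresholdConditionedPolicy(obs_dim=8, act_dim=2)
obs = torch.randn(1, 8)
for limit in (5.0, 10.0, 20.0):
    dist = policy(obs, torch.tensor([[limit]]))
    action = dist.sample()
```

How such a conditioned policy and its threshold-dependent value estimates are actually trained is the subject of the paper itself; the sketch only shows the conditioning interface.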