Constrained Policy Optimization via Bayesian World Models
Format: Article
Language: English
Abstract: Improving sample efficiency and safety are crucial challenges when deploying reinforcement learning in high-stakes, real-world applications. We propose LAMBDA, a novel model-based approach for policy optimization in safety-critical tasks modeled via constrained Markov decision processes. Our approach utilizes Bayesian world models and harnesses the resulting uncertainty to maximize optimistic upper bounds on the task objective, as well as pessimistic upper bounds on the safety constraints. We demonstrate LAMBDA's state-of-the-art performance on the Safety-Gym benchmark suite in terms of sample efficiency and constraint violation.
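The core idea in the abstract, using posterior uncertainty from a Bayesian world model to form an optimistic bound on the objective and a pessimistic bound on the constraint cost, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rollout function and its return/cost statistics are hypothetical stand-ins for trajectories imagined under models sampled from the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_rollouts(n_models=5):
    # Hypothetical stand-in: each "model" sampled from the Bayesian
    # world-model posterior yields one imagined trajectory's total
    # return (task reward) and total cost (safety-constraint signal).
    returns = rng.normal(loc=1.0, scale=0.5, size=n_models)
    costs = np.abs(rng.normal(loc=0.2, scale=0.1, size=n_models))
    return returns, costs

def lambda_style_bounds(returns, costs):
    # Optimistic upper bound on the task objective: the best-case
    # return across posterior samples drives exploration.
    optimistic_objective = returns.max()
    # Pessimistic upper bound on the constraint cost: the worst-case
    # cost across posterior samples keeps the policy conservative.
    pessimistic_cost = costs.max()
    return optimistic_objective, pessimistic_cost

returns, costs = posterior_rollouts()
objective_ub, cost_ub = lambda_style_bounds(returns, costs)
```

A constrained policy update would then maximize `objective_ub` subject to `cost_ub` staying below the constraint budget, so that the policy is optimistic about reward but pessimistic about safety.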
DOI: 10.48550/arxiv.2201.09802