Learning safety critics via a non-contractive binary Bellman operator
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The inability to naturally enforce safety in Reinforcement Learning (RL), with limited failures, is a core challenge impeding its use in real-world applications. One notion of safety of vast practical relevance is the ability to avoid (unsafe) regions of the state space. Though such a safety goal can be captured by an action-value-like function, a.k.a. a safety critic, the associated operator lacks the contraction and uniqueness properties that the classical Bellman operator enjoys. In this work, we overcome the non-contractiveness of safety critic operators by exploiting the fact that safety is a binary property. To that end, we study the properties of the binary safety critic associated with a deterministic dynamical system that seeks to avoid reaching an unsafe region. We formulate the corresponding binary Bellman equation (B2E) for safety and study its properties. While the resulting operator is still non-contractive, we fully characterize its fixed points, which, except for a spurious solution, represent maximal persistently safe regions of the state space, i.e., regions from which failure can always be avoided. We provide an algorithm that, by design, leverages axiomatic knowledge of safe data to avoid spurious fixed points.
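
The record gives only the abstract, but the binary Bellman equation (B2E) it refers to can be sketched in rough form. The following is an illustrative reconstruction under stated assumptions, not the paper's exact statement: take deterministic dynamics s' = f(s, a), an unsafe set \mathcal{U}, and a binary critic b(s, a) \in \{0, 1\} in which b = 1 flags failure. A B2E-style fixed-point condition then reads

\[
b(s,a) \;=\; \max\Big\{ \mathbb{1}\big[f(s,a) \in \mathcal{U}\big],\; \min_{a'} \, b\big(f(s,a),\, a'\big) \Big\},
\]

i.e., a state-action pair is unsafe if it drives the system into \mathcal{U}, or if every action available at the next state is itself unsafe. Note that the constant function b \equiv 1 satisfies this equation trivially, illustrating the kind of spurious fixed point the abstract mentions; pinning b = 0 on state-action pairs known a priori to be safe is the sort of axiomatic knowledge that rules it out.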
DOI: 10.48550/arxiv.2401.12849