Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning
Format: Article
Language: English
Abstract: Optimizing noisy functions online, when evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics, and many other fields. Often, constraints on safe inputs are unknown ahead of time, and we only obtain noisy information indicating how close we are to violating them. Yet safety must be guaranteed at all times, not only for the final output of the algorithm.
We introduce a general approach for seeking a stationary point in high-dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial. Our approach, called LB-SGD, is based on applying stochastic gradient descent (SGD) with a carefully chosen adaptive step size to a logarithmic barrier approximation of the original problem. We provide a complete convergence analysis for non-convex, convex, and strongly convex smooth constrained problems, with first-order and zeroth-order feedback. Our approach yields efficient updates and scales better with dimensionality than existing approaches.
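For concreteness, the logarithmic barrier surrogate referred to above typically takes the following standard form (the notation here is a common convention, not quoted from the paper): given an objective $f$ and constraints $g_i(x) \le 0$, $i = 1, \dots, m$, with barrier parameter $\eta > 0$,

```latex
% Log-barrier surrogate of  min f(x)  s.t.  g_i(x) <= 0  (x strictly feasible):
B_\eta(x) = f(x) - \eta \sum_{i=1}^{m} \log\bigl(-g_i(x)\bigr),
\qquad
\nabla B_\eta(x) = \nabla f(x) + \eta \sum_{i=1}^{m} \frac{\nabla g_i(x)}{-g_i(x)}.
```

As $x$ approaches a constraint boundary, the barrier term diverges, so gradient steps on $B_\eta$ are naturally repelled from the infeasible region; decreasing $\eta$ tightens the approximation to the original problem.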
We empirically compare the sample complexity and the computational cost of our method with those of existing safe learning approaches. Beyond synthetic benchmarks, we demonstrate the effectiveness of our approach on minimizing constraint violation in policy search tasks in safe reinforcement learning (RL).
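As an illustration only, below is a minimal sketch of log-barrier SGD with a safety-capped step size and a two-point zeroth-order gradient estimator. All names, constants, and the specific step-size rule here are simplifications assumed for exposition; this is not the paper's exact LB-SGD algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(fun, x, nu=1e-3):
    """Two-point zeroth-order gradient estimate from function values only
    (standard sphere-smoothing estimator; stands in for first-order feedback)."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return x.size * (fun(x + nu * u) - fun(x - nu * u)) / (2.0 * nu) * u

def lb_sgd(f, gs, x0, eta=0.1, M=1.0, steps=500):
    """Toy log-barrier SGD: descend B_eta(x) = f(x) - eta * sum_i log(-g_i(x)),
    capping each step so the iterate stays strictly feasible (all g_i(x) < 0)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        gvals = np.array([g(x) for g in gs])   # must all be strictly negative
        grad = zo_grad(f, x)
        for g, gv in zip(gs, gvals):
            grad += eta * zo_grad(g, x) / (-gv)
        # Safety-capped adaptive step: never move by more than a fraction of
        # the margin to the nearest constraint boundary (a crude stand-in for
        # the paper's adaptive step-size rule).
        margin = float(np.min(-gvals))
        gamma = min(1.0 / M, 0.25 * margin / (np.linalg.norm(grad) + 1e-12))
        x = x - gamma * grad
    return x

# Usage: minimize ||x - (2, 0)||^2 over the unit ball (||x||^2 - 1 <= 0),
# starting strictly inside; iterates stay feasible and approach the
# constrained optimum near (1, 0).
x_hat = lb_sgd(lambda x: float(np.sum((x - np.array([2.0, 0.0])) ** 2)),
               [lambda x: float(np.sum(x ** 2) - 1.0)],
               x0=np.array([0.1, 0.0]))
print(x_hat)
```

The step-size cap is what makes the sketch "safe": shrinking steps in proportion to the remaining constraint margin keeps every iterate, not just the final output, strictly feasible.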
DOI: 10.48550/arxiv.2207.10415