Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Reinforcement learning has been successfully used to solve difficult tasks in complex unknown environments. However, these methods typically do not provide any safety guarantees during the learning process. This is particularly problematic, since reinforcement learning agents actively explore their environment, which prevents their use in safety-critical, real-world applications. In this paper, we present a learning-based model predictive control scheme that provides high-probability safety guarantees throughout the learning process. Based on a reliable statistical model, we construct provably accurate confidence intervals on predicted trajectories. Unlike previous approaches, we allow for input-dependent uncertainties. Based on these reliable predictions, we guarantee that trajectories satisfy safety constraints. Moreover, we use a terminal set constraint to recursively guarantee the existence of safe control actions at every iteration. We evaluate the resulting algorithm by safely exploring the dynamics of an inverted pendulum and by solving a reinforcement learning task on a cart-pole system with safety constraints. |
DOI: | 10.48550/arxiv.1906.12189 |
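
As a rough sketch of the mechanism the summary describes (not the authors' implementation), the Python snippet below propagates confidence intervals of an uncertainty-aware one-step model along an MPC horizon and accepts a first action only if the entire predicted tube satisfies the state constraint and ends inside a terminal safe set. The dynamics, the input-dependent noise model, the exploration objective, and all constants (`BETA`, `X_MAX`, `TERMINAL`, `HORIZON`) are assumptions chosen purely for illustration.

```python
# Toy stand-in for the paper's "reliable statistical model": a one-step
# predictor that returns a mean and an input-dependent standard deviation.
# Dynamics, noise model, and all constants below are illustrative assumptions.
def predict_next(state, action):
    mean = 0.9 * state + 0.5 * action      # assumed nominal linear dynamics
    std = 0.02 * (1.0 + abs(action))       # uncertainty grows with the input
    return mean, std

BETA = 2.0       # confidence scaling for the high-probability intervals
X_MAX = 1.0      # safety (state) constraint: |x| <= X_MAX
TERMINAL = 0.5   # terminal safe set: |x| <= TERMINAL
HORIZON = 5      # MPC prediction horizon

def trajectory_is_safe(x0, actions):
    """Propagate a conservative confidence interval along the horizon,
    checking the state constraint at every step and the terminal set at
    the end. Endpoint-wise propagation is only sound for the monotone toy
    model above; the paper constructs provably accurate over-approximations."""
    lo = hi = x0
    for u in actions:
        m_lo, s_lo = predict_next(lo, u)
        m_hi, s_hi = predict_next(hi, u)
        lo = min(m_lo - BETA * s_lo, m_hi - BETA * s_hi)
        hi = max(m_lo + BETA * s_lo, m_hi + BETA * s_hi)
        if hi > X_MAX or lo < -X_MAX:
            return False                   # predicted tube leaves the safe region
    # Terminal set constraint: this is what recursively guarantees that a
    # safe control action exists again at the next iteration.
    return -TERMINAL <= lo and hi <= TERMINAL

def safe_exploration_action(x0):
    """Return the largest-magnitude first action (a toy exploration
    objective) whose whole predicted tube is certified safe; otherwise
    fall back to the assumed-safe default action 0."""
    candidates = [i / 10.0 - 1.0 for i in range(21)]   # -1.0, -0.9, ..., 1.0
    safe = [u for u in candidates
            if trajectory_is_safe(x0, [u] + [0.0] * (HORIZON - 1))]
    return max(safe, key=abs) if safe else 0.0

print(safe_exploration_action(0.5))        # certifies a safe exploratory action
```

The terminal set check is the piece that makes safety recursive: if the predicted tube ends inside a set from which a safe continuation is known to exist, a feasible safe plan can be found again at the next time step.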