Safety-guaranteed Reinforcement Learning based on Multi-class Support Vector Machine
Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Several works have addressed the problem of incorporating constraints
into the reinforcement learning (RL) framework; however, the majority of them
can only guarantee the satisfaction of soft constraints. In this work, we
address the problem of satisfying hard state constraints in a model-free RL
setting with deterministic system dynamics. The proposed algorithm is developed
for discrete state and action spaces and utilizes a multi-class support vector
machine (SVM) to represent the policy. The state constraints are incorporated
into the SVM optimization framework to derive an analytical solution for
determining the policy parameters. The final policy converges to a solution
that is guaranteed to satisfy the constraints. Additionally, the proposed
formulation adheres to the Q-learning framework and thus also guarantees
convergence to the optimal solution. The algorithm is demonstrated on
multiple example problems. |
DOI: | 10.48550/arxiv.2006.07446 |
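The abstract's core guarantee — hard state constraints under deterministic dynamics with discrete states and actions — can be illustrated with a toy sketch: if unsafe successor states are known, any action leading into one can be masked during both exploration and greedy execution, so no trajectory ever visits an unsafe state. The sketch below is a minimal tabular Q-learning illustration of that masking idea on an assumed 4x4 gridworld (all names and parameters are hypothetical); it does not reproduce the paper's actual contribution, which derives the safe policy analytically inside a multi-class SVM optimization rather than by tabular masking.

```python
import random

random.seed(0)

SIZE = 4                      # 4x4 gridworld, deterministic transitions
UNSAFE = {(1, 1)}             # hard state constraint: never visit this cell
GOAL = (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    """Deterministic dynamics: move and clip to the grid boundary."""
    return (min(max(s[0] + a[0], 0), SIZE - 1),
            min(max(s[1] + a[1], 0), SIZE - 1))

def allowed(s):
    """Mask any action whose successor state violates the constraint."""
    return [i for i, a in enumerate(ACTIONS) if step(s, a) not in UNSAFE]

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

# Q-learning restricted to the allowed (safe) action set in every state.
for episode in range(2000):
    s = (0, 0)
    for _ in range(50):
        acts = allowed(s)
        if random.random() < 0.2:
            a = random.choice(acts)          # explore among safe actions only
        else:
            a = max(acts, key=lambda i: q(s, i))
        s2 = step(s, ACTIONS[a])
        r = 1.0 if s2 == GOAL else 0.0
        boot = 0.0 if s2 == GOAL else 0.95 * max(q(s2, b) for b in allowed(s2))
        Q[(s, a)] = q(s, a) + 0.5 * (r + boot - q(s, a))
        s = s2
        if s == GOAL:
            break

# Greedy rollout: by construction the path cannot enter an unsafe state.
s, path = (0, 0), [(0, 0)]
for _ in range(20):
    a = max(allowed(s), key=lambda i: q(s, i))
    s = step(s, ACTIONS[a])
    path.append(s)
    if s == GOAL:
        break

assert all(p not in UNSAFE for p in path)
```

Note the design point this mirrors from the abstract: the constraint is enforced structurally (unsafe actions are unrepresentable in the policy), not via a reward penalty, which is why the guarantee is hard rather than soft.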