Learning Deep Energy Shaping Policies for Stability-Guaranteed Manipulation
Saved in:
Main Authors: , , ,
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Abstract: Deep reinforcement learning (DRL) has been successfully used to solve various robotic manipulation tasks. However, most existing works do not address the issue of control stability. This is in sharp contrast to the control theory community, where the well-established norm is to prove stability whenever a control law is synthesized. Traditional stability analysis is difficult for DRL because of the uninterpretable nature of neural network policies and the unknown system dynamics. In this work, stability is obtained by deriving an interpretable deep policy structure based on the $\textit{energy shaping}$ control of Lagrangian systems. Stability during physical interaction with an unknown environment is then established based on $\textit{passivity}$. The result is stability-guaranteeing DRL in a model-free framework that is general enough for contact-rich manipulation tasks. With an experiment on a peg-in-hole task, we demonstrate, to the best of our knowledge, the first DRL with a stability guarantee on a real robotic manipulator.
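The abstract only names the two ingredients, energy shaping and passivity. For orientation, a standard textbook-style energy shaping construction for a fully actuated Lagrangian system is sketched below; this is a generic formulation, not the paper's exact policy parameterization. Here $V_\theta$ stands for a learned potential (e.g., a neural network) and $D$ for an assumed positive semidefinite damping term, both introduced for illustration:

For a fully actuated Lagrangian system
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau + \tau_{\mathrm{ext}},$$
the energy shaping control law
$$\tau = g(q) - \nabla_q V_\theta(q) - D\,\dot{q}, \qquad D \succeq 0,$$
gives the closed loop the total energy $H(q,\dot{q}) = \tfrac{1}{2}\dot{q}^\top M(q)\dot{q} + V_\theta(q)$, and (using the skew symmetry of $\dot{M} - 2C$)
$$\dot{H} = -\dot{q}^\top D\,\dot{q} + \dot{q}^\top \tau_{\mathrm{ext}} \le \dot{q}^\top \tau_{\mathrm{ext}}.$$
Thus $H$ serves as a Lyapunov function for the unforced system, and the map $\tau_{\mathrm{ext}} \mapsto \dot{q}$ is passive, which is the property that yields stable physical interaction with an unknown passive environment.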
DOI: 10.48550/arxiv.2103.16432