Learning from Demonstrations using Signal Temporal Logic
Format: Article
Language: English
Online access: Order full text
Abstract: Learning-from-demonstrations is an emerging paradigm for obtaining effective robot control policies for complex tasks via reinforcement learning, without the need to explicitly design reward functions. However, it is susceptible to imperfections in demonstrations and also raises concerns about the safety and interpretability of the learned control policies. To address these issues, we use Signal Temporal Logic (STL) to evaluate and rank the quality of demonstrations. Temporal logic-based specifications allow us to create non-Markovian rewards and to define causal dependencies between tasks, such as sequential task specifications. We validate our approach through experiments on discrete-world and OpenAI Gym environments, and show that it outperforms the state-of-the-art Maximum Causal Entropy Inverse Reinforcement Learning.
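The abstract's central mechanism, scoring and ranking demonstrations by the quantitative (robustness) semantics of an STL specification, can be illustrated with a minimal sketch. The formula, trajectory format, thresholds, and function names below are illustrative assumptions for a 1-D task, not the paper's actual specification or implementation.

```python
# Hedged sketch: rank demonstrations by STL robustness.
# Assumed example formula: phi = G_[0,T](|x_t| <= limit)  AND  F_[0,T](x_t >= goal)
# Standard quantitative semantics: rho(G psi) = min_t rho(psi, t),
# rho(F psi) = max_t rho(psi, t), rho(psi1 AND psi2) = min(rho(psi1), rho(psi2)).

from typing import Sequence


def robustness(traj: Sequence[float], goal: float = 1.0, limit: float = 1.5) -> float:
    """Robustness of the example formula on a 1-D state trajectory (higher is better)."""
    always_safe = min(limit - abs(x) for x in traj)    # G: stay within |x| <= limit
    eventually_goal = max(x - goal for x in traj)      # F: reach x >= goal at some step
    return min(always_safe, eventually_goal)           # conjunction of the two clauses


def rank_demonstrations(demos: Sequence[Sequence[float]]) -> list:
    """Order demonstrations from best to worst by STL robustness."""
    return sorted(demos, key=robustness, reverse=True)


if __name__ == "__main__":
    demos = [
        [0.0, 0.4, 0.9, 1.2],   # reaches the goal and stays safe
        [0.0, 0.3, 0.6, 0.8],   # safe but never reaches the goal (negative robustness)
        [0.0, 1.1, 1.8, 1.2],   # reaches the goal but violates the safety bound
    ]
    for d in rank_demonstrations(demos):
        print(d, robustness(d))
```

The resulting robustness scores give a total order over demonstrations, which is what allows imperfect demonstrations to be down-weighted or discarded before reward learning; the signed score also serves directly as a non-Markovian reward signal, since it depends on the whole trajectory rather than on a single state.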
DOI: 10.48550/arxiv.2102.07730