Model-Free Reinforcement Learning for Symbolic Automata-encoded Objectives
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Reinforcement learning (RL) is a popular approach for robotic path planning in uncertain environments. However, the control policies trained for an RL agent crucially depend on user-defined, state-based reward functions. Poorly designed rewards can lead to policies that achieve maximal reward yet fail to satisfy the desired task objectives or are unsafe. Formal languages such as temporal logics and automata have been used to specify high-level task objectives for robots (in lieu of Markovian rewards). Recent efforts have focused on inferring state-based rewards from such formal specifications; the goal is to provide (probabilistic) guarantees that the policy learned using RL (with the inferred rewards) satisfies the high-level formal specification. A key drawback of several of these techniques is that the inferred rewards are sparse: the agent receives positive reward only upon completing the task and no reward otherwise, which naturally leads to poor convergence and high variance during RL. In this work, we propose using formal specifications in the form of symbolic automata, which generalize both bounded-time temporal-logic specifications and automata. Furthermore, symbolic automata allow us to define non-sparse, potential-based rewards that empirically shape the reward surface, leading to better convergence during RL. We also show that this potential-based rewarding strategy still allows us to obtain the policy that maximizes satisfaction of the given specification. |
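To illustrate the kind of non-sparse, potential-based shaping the abstract refers to, here is a minimal Python sketch (not the paper's construction): a potential Φ assigns each automaton state a value reflecting its progress toward acceptance, and the shaped reward uses the standard potential-based form F = γΦ(q') − Φ(q). The toy automaton, its guards, and the potential values below are all hypothetical.

```python
# Illustrative sketch only: potential-based reward shaping driven by progress
# through a small, hypothetical symbolic automaton for a "reach A, then B" task.

GAMMA = 0.99

# Guards are symbolic predicates over the MDP observation (here, 2-D position).
def guard_q0_to_q1(x, y):
    return abs(x - 1.0) < 0.1 and abs(y - 1.0) < 0.1   # region A reached

def guard_q1_to_acc(x, y):
    return abs(x + 1.0) < 0.1 and abs(y - 1.0) < 0.1   # region B reached

def step_automaton(q, x, y):
    """Advance the automaton state given the current MDP observation."""
    if q == "q0" and guard_q0_to_q1(x, y):
        return "q1"
    if q == "q1" and guard_q1_to_acc(x, y):
        return "q_acc"
    return q

# Potential: larger for automaton states closer to acceptance, so the agent
# receives intermediate (non-sparse) reward for making progress.
PHI = {"q0": 0.0, "q1": 1.0, "q_acc": 2.0}

def shaped_reward(q, q_next, task_reward=0.0):
    """Standard potential-based shaping: F = gamma * Phi(q') - Phi(q)."""
    return task_reward + GAMMA * PHI[q_next] - PHI[q]
```

In an RL loop one would track the product state (MDP state, automaton state) and add `shaped_reward` to any terminal satisfaction reward; shaping of this potential-based form is known not to change which policies are optimal for the product MDP, which is consistent with the abstract's claim that the shaped rewards still yield a policy maximizing specification satisfaction.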
DOI: | 10.48550/arxiv.2202.02404 |