No-Regret Exploration in Goal-Oriented Reinforcement Learning
Format: | Article |
Language: | English |
Summary: | International Conference on Machine Learning (ICML 2020). Many popular reinforcement learning problems (e.g., navigation in a maze, some Atari games, mountain car) are instances of the episodic setting under its stochastic shortest path (SSP) formulation, where an agent has to achieve a goal state while minimizing the cumulative cost. Despite the popularity of this setting, the exploration-exploitation dilemma has been sparsely studied in general SSP problems, with most of the theoretical literature focusing on different problems (i.e., fixed-horizon and infinite-horizon) or making the restrictive loop-free SSP assumption (i.e., no state can be visited twice during an episode). In this paper, we study the general SSP problem with no assumption on its dynamics (some policies may actually never reach the goal). We introduce UC-SSP, the first no-regret algorithm in this setting, and prove a regret bound scaling as $\widetilde{\mathcal{O}}(D S \sqrt{A D K})$ after $K$ episodes for any unknown SSP with $S$ states, $A$ actions, positive costs, and SSP-diameter $D$, defined as the smallest expected hitting time from any starting state to the goal. We achieve this result by crafting a novel stopping rule, such that UC-SSP may interrupt the current policy if it is taking too long to reach the goal and switch to alternative policies designed to rapidly terminate the episode. |
DOI: | 10.48550/arxiv.1912.03517 |
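
To make the SSP-diameter in the summary concrete, here is a minimal, hypothetical Python sketch (not code from the paper). It reads $D$ as the largest, over starting states, of the minimal expected hitting time to the goal, and computes those hitting times for a small known MDP with a value-iteration-style fixed point on unit costs. The transition tensor `P`, the `goal` index, and the helper `ssp_diameter` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ssp_diameter(P, goal, tol=1e-8, max_iter=100_000):
    """Illustrative sketch: SSP-diameter of a known MDP.

    P: transition tensor of shape (S, A, S); `goal` is an absorbing state index.
    Returns the SSP-diameter (worst-case minimal expected hitting time) and the
    per-state minimal expected hitting times.
    """
    S, A, _ = P.shape
    V = np.zeros(S)                      # V[s] = minimal expected hitting time from s
    for _ in range(max_iter):
        # Bellman backup for expected hitting time: one step plus expected
        # cost-to-go, minimized over actions; the goal itself costs nothing.
        Q = 1.0 + P @ V                  # shape (S, A)
        V_new = Q.min(axis=1)
        V_new[goal] = 0.0
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V.max(), V

# Toy 3-state chain: state 2 is the goal; the single action moves right w.p. 0.9.
P = np.zeros((3, 1, 3))
P[0, 0] = [0.1, 0.9, 0.0]
P[1, 0] = [0.0, 0.1, 0.9]
P[2, 0] = [0.0, 0.0, 1.0]                # goal is absorbing
D, V = ssp_diameter(P, goal=2)
print(f"SSP-diameter D ≈ {D:.3f}, hitting times {np.round(V, 3)}")
```

In this toy chain the fixed point gives hitting times of roughly 2.22 and 1.11 steps from states 0 and 1, so $D \approx 2.22$; the regret bound quoted above scales with this quantity rather than with an episode horizon.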