Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs
Format: | Article |
Language: | English |
Abstract: | In probably approximately correct (PAC) reinforcement learning (RL), an agent is required to identify an $\epsilon$-optimal policy with probability $1-\delta$. While minimax optimal algorithms exist for this problem, its instance-dependent complexity remains elusive in episodic Markov decision processes (MDPs). In this paper, we propose the first nearly matching (up to a horizon squared factor and logarithmic terms) upper and lower bounds on the sample complexity of PAC RL in deterministic episodic MDPs with finite state and action spaces. In particular, our bounds feature a new notion of sub-optimality gap for state-action pairs that we call the deterministic return gap. While our instance-dependent lower bound is written as a linear program, our algorithms are very simple and do not require solving such an optimization problem during learning. Their design and analyses employ novel ideas, including graph-theoretical concepts (minimum flows) and a new maximum-coverage exploration strategy. |
DOI: | 10.48550/arxiv.2203.09251 |
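
For readers unfamiliar with the setting, the identification criterion described in the abstract can be written out explicitly. The display below is a minimal sketch using standard episodic-MDP notation assumed here rather than taken from the paper ($s_1$ for the initial state, $H$ for the horizon, $V^{\pi}_h$ and $Q^{*}_h$ for value functions, $\hat{\pi}$ for the returned policy); the paper's own deterministic return gap is a different, instance-dependent quantity whose exact definition is given in the paper.

```latex
% Minimal sketch of the PAC RL objective and the classical gap notion, using
% assumed standard notation (s_1, H, V, Q, \hat\pi); this is NOT the paper's
% "deterministic return gap", only the usual baseline notion it refines.
% PAC criterion: the returned policy \hat\pi must be \epsilon-optimal with
% probability at least 1 - \delta over the algorithm's samples:
\[
  \mathbb{P}\Bigl( V^{*}_{1}(s_{1}) - V^{\hat{\pi}}_{1}(s_{1}) \le \epsilon \Bigr) \;\ge\; 1 - \delta .
\]
% Classical value gap of a state-action pair (s, a) at step h, stated for
% comparison with the paper's gap notion:
\[
  \Delta_{h}(s,a) \;=\; V^{*}_{h}(s) - Q^{*}_{h}(s,a).
\]
```

When transitions and rewards are deterministic, every policy's return from $s_1$ is a fixed number, which suggests why a return-based gap is a natural instance-dependent quantity in this setting; the paper's bounds are stated in terms of that refined quantity rather than the classical one above.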