Towards Instance-Optimality in Online PAC Reinforcement Learning
Format: Article
Language: English
Abstract: Several recent works have proposed instance-dependent upper bounds on the number of episodes needed to identify, with probability $1-\delta$, an $\varepsilon$-optimal policy in finite-horizon tabular Markov Decision Processes (MDPs). These upper bounds feature various complexity measures for the MDP, which are defined based on different notions of sub-optimality gaps. However, no lower bound has yet been established to assess the optimality of any of these complexity measures, except in the special case of MDPs with deterministic transitions. In this paper, we propose the first instance-dependent lower bound on the sample complexity required for PAC identification of a near-optimal policy in any tabular episodic MDP. Additionally, we demonstrate that the sample complexity of the PEDEL algorithm of \cite{Wagenmaker22linearMDP} closely approaches this lower bound. Given that PEDEL is computationally intractable, we leave open the question of whether our lower bound can be matched by a computationally efficient algorithm.
DOI: 10.48550/arxiv.2311.05638
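The complexity measures mentioned in the abstract are built from per-step sub-optimality gaps $\Delta_h(s,a) = V^\star_h(s) - Q^\star_h(s,a)$. As a rough illustration of that notion only (this is not the paper's construction, and the transition probabilities and rewards below are entirely made up), the following Python sketch computes these gaps by backward induction in a small finite-horizon tabular MDP.

```python
import numpy as np

# Toy finite-horizon tabular MDP (hypothetical numbers, for illustration only).
H, S, A = 3, 2, 2                      # horizon, number of states, number of actions
rng = np.random.default_rng(0)

# P[h, s, a] is a distribution over next states; R[h, s, a] is the mean reward.
P = rng.dirichlet(np.ones(S), size=(H, S, A))
R = rng.uniform(0.0, 1.0, size=(H, S, A))

# Backward induction for the optimal value functions Q*_h and V*_h.
Q = np.zeros((H, S, A))
V = np.zeros((H + 1, S))               # V*_{H+1} = 0 by convention
for h in reversed(range(H)):
    Q[h] = R[h] + P[h] @ V[h + 1]      # Q*_h(s,a) = r_h(s,a) + E[V*_{h+1}(s')]
    V[h] = Q[h].max(axis=1)

# Sub-optimality gaps: Delta_h(s,a) = V*_h(s) - Q*_h(s,a).
gaps = V[:H, :, None] - Q
print("smallest non-zero gap:", gaps[gaps > 1e-12].min())
```

A policy is $\varepsilon$-optimal when its value at the initial state is within $\varepsilon$ of $V^\star_1$; instance-dependent bounds of the kind discussed in the abstract describe how the number of episodes needed to certify this, with probability $1-\delta$, scales with the gaps of the particular MDP at hand.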