Improved Sample Complexity for Reward-free Reinforcement Learning under Low-rank MDPs
Format: Article
Language: English
Abstract: In reward-free reinforcement learning (RL), an agent first explores the environment without any reward information, in order to achieve certain learning goals afterwards for any given reward. In this paper we focus on reward-free RL under low-rank MDP models, in which both the representation and linear weight vectors are unknown. Although various algorithms have been proposed for reward-free low-rank MDPs, the corresponding sample complexity is still far from satisfactory. In this work, we first establish the first known sample complexity lower bound that holds for any algorithm under low-rank MDPs. This lower bound implies that it is strictly harder to find a near-optimal policy under low-rank MDPs than under linear MDPs. We then propose a novel model-based algorithm, coined RAFFLE, and show that it can both find an $\epsilon$-optimal policy and achieve an $\epsilon$-accurate system identification via reward-free exploration, with a sample complexity that significantly improves upon previous results. This sample complexity matches our lower bound in its dependence on $\epsilon$, as well as on $K$ in the large-$d$ regime, where $d$ and $K$ respectively denote the representation dimension and the action space cardinality. Finally, we provide a planning algorithm (requiring no further interaction with the true environment) for RAFFLE to learn a near-accurate representation, which is the first known representation learning guarantee in this setting.
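For context, the low-rank MDP model referenced above factors each transition kernel through an unknown $d$-dimensional representation. The display below is a standard formulation of this model rather than notation taken from the record itself; the symbols $\phi^\ast$ and $\mu^\ast$ are assumed here.

$$P(s' \mid s, a) \;=\; \big\langle \phi^\ast(s, a),\, \mu^\ast(s') \big\rangle, \qquad \phi^\ast(s, a),\ \mu^\ast(s') \in \mathbb{R}^d,$$

where both the representation $\phi^\ast$ and the weight functions $\mu^\ast$ are unknown to the learner. In a linear MDP the representation $\phi^\ast$ is given and only the weights must be learned, which is consistent with the lower bound above indicating that the low-rank setting is strictly harder.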
DOI: 10.48550/arxiv.2303.10859