Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs
Format: Article
Language: English
Abstract: Recent studies have shown that episodic reinforcement learning (RL) is no more difficult than contextual bandits, even with a long planning horizon and unknown state transitions. However, these results are limited to either tabular Markov decision processes (MDPs) or computationally inefficient algorithms for linear mixture MDPs. In this paper, we propose the first computationally efficient horizon-free algorithm for linear mixture MDPs, which achieves the optimal $\tilde O(d\sqrt{K} + d^2)$ regret up to logarithmic factors. Our algorithm adapts a weighted least squares estimator for the unknown transition dynamics, where the weight is both \emph{variance-aware} and \emph{uncertainty-aware}. When applying our weighted least squares estimator to heterogeneous linear bandits, we obtain an $\tilde O(d\sqrt{\sum_{k=1}^K \sigma_k^2} + d)$ regret in the first $K$ rounds, where $d$ is the dimension of the context and $\sigma_k^2$ is the variance of the reward in the $k$-th round. This also improves upon the best-known algorithms in this setting when the $\sigma_k^2$'s are known.
DOI: 10.48550/arxiv.2205.11507
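
To make the weighting idea in the abstract concrete, below is a minimal Python sketch of an online weighted ridge regression whose per-sample weight depends on both the (estimated) noise variance and an uncertainty proxy. Everything here, including the class name `WeightedRidge`, the rule combining variance and uncertainty via a `max`, and all constants, is an illustrative assumption, not the paper's exact algorithm.

```python
import numpy as np


class WeightedRidge:
    """Online ridge regression with per-sample weights.

    A schematic sketch of a variance- and uncertainty-aware weighted
    least squares estimator in the spirit of the abstract above; the
    weighting rule and all constants are illustrative assumptions.
    """

    def __init__(self, dim: int, lam: float = 1.0, sigma_min: float = 1e-2):
        self.Lambda = lam * np.eye(dim)  # regularized Gram matrix
        self.b = np.zeros(dim)           # weighted sum of y_k * x_k
        self.sigma_min = sigma_min       # variance floor keeps weights bounded

    def update(self, x: np.ndarray, y: float, sigma2: float,
               alpha: float = 1.0) -> None:
        """Incorporate one sample (x, y) with estimated noise variance sigma2."""
        # Uncertainty proxy: the elliptical norm ||x||_{Lambda^{-1}}^2.
        uncertainty = float(x @ np.linalg.solve(self.Lambda, x))
        # Effective weight 1 / max{variance, uncertainty, floor}: the weight
        # shrinks when either the noise variance or the uncertainty of the
        # direction x is large, i.e., it is both variance-aware and
        # uncertainty-aware (schematic choice of how the terms combine).
        w = 1.0 / max(sigma2, alpha * uncertainty, self.sigma_min ** 2)
        self.Lambda += w * np.outer(x, x)
        self.b += w * y * x

    def estimate(self) -> np.ndarray:
        """Return theta_hat = Lambda^{-1} b (solve rather than invert)."""
        return np.linalg.solve(self.Lambda, self.b)


# Hypothetical usage on heterogeneous linear-bandit data: rewards follow
# y_k = <theta, x_k> + noise with known per-round variance sigma_k^2.
rng = np.random.default_rng(0)
d, K = 4, 500
theta = rng.normal(size=d)
est = WeightedRidge(dim=d)
for _ in range(K):
    x = rng.normal(size=d)
    sigma2 = rng.uniform(0.01, 1.0)  # per-round reward variance
    y = theta @ x + rng.normal(scale=np.sqrt(sigma2))
    est.update(x, y, sigma2)
print(np.linalg.norm(est.estimate() - theta))  # estimation error, should be small
```

One plausible reading of the design: a sample is down-weighted when its reward variance is large, and also when its direction is still highly uncertain under the current Gram matrix, so that no single round can dominate the estimator.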