Delay-Adapted Policy Optimization and Improved Regret for Adversarial MDP with Delayed Bandit Feedback
Format: Article
Language: English
Abstract: Policy Optimization (PO) is one of the most popular methods in Reinforcement Learning (RL). Thus, theoretical guarantees for PO algorithms have become especially important to the RL community. In this paper, we study PO in adversarial MDPs with a challenge that arises in almost every real-world application -- \textit{delayed bandit feedback}. We give the first near-optimal regret bounds for PO in tabular MDPs, which may even surpass the state-of-the-art bounds (obtained with less efficient methods). Our novel Delay-Adapted PO (DAPO) is easy to implement and to generalize, allowing us to extend our algorithm to: (i) infinite state spaces under the assumption of a linear $Q$-function, proving the first regret bounds for delayed feedback with function approximation; (ii) deep RL, demonstrating its effectiveness in experiments on MuJoCo domains.
DOI: 10.48550/arxiv.2305.07911
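Since the record only summarizes the setting, here is a minimal, self-contained sketch of the general idea of policy optimization under delayed bandit feedback, reduced to the single-state (multi-armed bandit) special case of the adversarial MDP model. This is not the paper's DAPO algorithm: the loss sequence, the delay distribution, and the constants `K`, `T`, `ETA`, and `MAX_DELAY` are all illustrative assumptions. The one ingredient it does show is that each importance-weighted loss estimate divides by the action probability recorded when the action was *played*, so the update stays unbiased even though it is only applied once the delayed feedback arrives.

```python
import math
import random

# Sketch: exponential-weights policy optimization with delayed bandit
# feedback, on a single-state (bandit) toy problem. Illustrative only.

K = 5           # number of actions (assumption)
T = 10_000      # number of episodes (assumption)
ETA = 0.05      # learning rate (illustrative choice)
MAX_DELAY = 50  # feedback for episode t arrives at some t + d_t <= t + MAX_DELAY

rng = random.Random(0)
log_weights = [0.0] * K  # log-space weights of the softmax policy
pending = {}             # arrival time -> list of (action, prob_at_play, loss)

def policy():
    """Current softmax policy over actions (computed stably in log space)."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    z = sum(w)
    return [x / z for x in w]

for t in range(T):
    probs = policy()
    # Bandit feedback: we only observe the loss of the chosen action,
    # and only after a random delay d.
    a = rng.choices(range(K), weights=probs)[0]
    loss = rng.random() * (0.5 if a == t % K else 1.0)  # arbitrary loss sequence
    d = rng.randint(0, MAX_DELAY)
    pending.setdefault(t + d, []).append((a, probs[a], loss))

    # Apply every piece of feedback that arrives at time t. The importance
    # weight uses the probability recorded at play time, which keeps the
    # loss estimate unbiased even though the policy has moved since then.
    # (Feedback arriving after episode T-1 is simply dropped in this toy.)
    for (a_old, p_old, ell) in pending.pop(t, []):
        est = ell / p_old              # importance-weighted loss estimate
        log_weights[a_old] -= ETA * est

print("final policy:", [round(p, 3) for p in policy()])
```

In the full adversarial-MDP setting the same pattern is applied per state with estimated $Q$-values in place of the raw bandit losses, and the paper's delay-adapted analysis concerns how the estimator and learning rate must account for the delays; the sketch above only conveys the buffering-and-late-update mechanics.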