Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines
Saved in:

| Field | Value |
|---|---|
| Main Authors | , , , , , , , |
| Format | Article |
| Language | English |
| Subjects | |
| Online Access | Order full text |
Summary: Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high-variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis and numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target-matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and on high-dimensional hand-manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.
DOI: 10.48550/arxiv.1803.07246
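
The summary above describes a per-dimension baseline b_i(s, a_-i) that may also depend on the other action dimensions of a factorized policy π(a|s) = Π_i π_i(a_i|s); because b_i never depends on the sampled a_i, the subtracted term has zero expectation under π_i and the gradient estimate remains unbiased. Below is a minimal, self-contained sketch of that general idea on a one-step toy problem; it is not the paper's implementation. The linear-Gaussian policy, the toy reward, and all names (`toy_reward`, `action_dependent_baseline`, `n_samples`, ...) are illustrative assumptions, and the baseline is simply a Monte Carlo marginalization of dimension i.

```python
# Minimal sketch (not the authors' code) of a per-dimension, action-dependent
# baseline for a factorized Gaussian policy on a one-step toy problem.
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 3, 4

def toy_reward(state, action):
    # Arbitrary smooth reward; stands in for the return R(s, a).
    return -np.sum((action - np.mean(state)) ** 2)

def policy_params(theta, state):
    # Factorized Gaussian policy: a_i ~ N(mean_i(s), sigma^2), independent per dimension.
    mean = theta @ state          # theta has shape (action_dim, state_dim)
    sigma = 0.5
    return mean, sigma

def grad_log_prob_dim(theta, state, action, i):
    # Gradient of log pi_i(a_i | s) w.r.t. theta: only row i is nonzero,
    # d/d(theta_i) log N(a_i; theta_i . s, sigma^2) = (a_i - mean_i) / sigma^2 * s.
    mean, sigma = policy_params(theta, state)
    g = np.zeros_like(theta)
    g[i] = (action[i] - mean[i]) / sigma**2 * state
    return g

def action_dependent_baseline(theta, state, action, i, n_samples=64):
    # b_i(s, a_{-i}): Monte Carlo estimate of E_{a_i ~ pi_i}[R(s, a_i, a_{-i})],
    # i.e. the return with dimension i marginalized out. It never uses the
    # sampled a_i, which is what keeps the estimator bias-free.
    mean, sigma = policy_params(theta, state)
    vals = []
    for _ in range(n_samples):
        a = action.copy()
        a[i] = rng.normal(mean[i], sigma)
        vals.append(toy_reward(state, a))
    return np.mean(vals)

def policy_gradient_estimate(theta, state):
    # Sum over action dimensions of score_i * (return - per-dimension baseline).
    mean, sigma = policy_params(theta, state)
    action = rng.normal(mean, sigma)
    r = toy_reward(state, action)
    grad = np.zeros_like(theta)
    for i in range(action_dim):
        b_i = action_dependent_baseline(theta, state, action, i)
        grad += grad_log_prob_dim(theta, state, action, i) * (r - b_i)
    return grad

theta = rng.normal(size=(action_dim, state_dim))
state = rng.normal(size=state_dim)
print(policy_gradient_estimate(theta, state))
```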