Sample Complexity of Estimating the Policy Gradient for Nearly Deterministic Dynamical Systems
| Author: | |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract: Reinforcement learning is a promising approach to learning robotics controllers. It has recently been shown that algorithms based on finite-difference estimates of the policy gradient are competitive with algorithms based on the policy gradient theorem. We propose a theoretical framework for understanding this phenomenon. Our key insight is that many dynamical systems (especially those of interest in robotics control tasks) are nearly deterministic, i.e., they can be modeled as a deterministic system with a small stochastic perturbation. We show that for such systems, finite-difference estimates of the policy gradient can have substantially lower variance than estimates based on the policy gradient theorem. Finally, we empirically evaluate our insights in an experiment on the inverted pendulum.
DOI: 10.48550/arxiv.1901.08562
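To make the abstract's variance comparison concrete, here is a minimal sketch contrasting the two estimator families on a hypothetical one-step linear system with a small Gaussian perturbation, a stand-in for near-determinism. This is not the paper's inverted-pendulum experiment: the dynamics, the names (`fd_estimate`, `lr_estimate`, `rollout_return`), and the constants (`sigma`, `eps`, `s`) are all illustrative assumptions. The finite-difference estimator perturbs the policy parameter directly, while the likelihood-ratio estimator (the REINFORCE form suggested by the policy gradient theorem, here without a baseline) weights returns by the score function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem (assumption, not the paper's setup):
# one-step dynamics x1 = x0 + u + sigma * w with small sigma, so the
# system is "nearly deterministic"; linear policy u = theta * x0;
# the return is the negative quadratic cost of the next state.
sigma = 0.01   # magnitude of the stochastic perturbation
theta = 0.5    # current policy parameter
x0 = 1.0       # fixed initial state

def rollout_return(th, w):
    """Return of one rollout under parameter th and dynamics noise w."""
    u = th * x0
    x1 = x0 + u + sigma * w
    return -(x1 ** 2)

def fd_estimate(eps=0.1, n=2000):
    """Central finite differences: (R(theta+eps) - R(theta-eps)) / (2*eps)."""
    grads = np.empty(n)
    for i in range(n):
        w_plus, w_minus = rng.standard_normal(2)
        grads[i] = (rollout_return(theta + eps, w_plus)
                    - rollout_return(theta - eps, w_minus)) / (2 * eps)
    return grads

def lr_estimate(s=0.1, n=2000):
    """Likelihood-ratio (REINFORCE) estimator with Gaussian exploration
    u ~ N(theta * x0, s^2); score = d/dtheta log pi(u | x0)."""
    grads = np.empty(n)
    for i in range(n):
        u = theta * x0 + s * rng.standard_normal()
        ret = -(x0 + u + sigma * rng.standard_normal()) ** 2
        score = (u - theta * x0) * x0 / s ** 2
        grads[i] = ret * score
    return grads

fd, lr = fd_estimate(), lr_estimate()
# The exact gradient here is -2 * x0 * (x0 + theta * x0) = -3.0.
print(f"finite-difference: mean {fd.mean():+.3f}, variance {fd.var():.5f}")
print(f"likelihood-ratio:  mean {lr.mean():+.3f}, variance {lr.var():.5f}")
```

Both estimators are centered near the true gradient, but their spreads differ sharply: because the dynamics noise `sigma` is small, the finite-difference estimate is nearly noise-free, while the likelihood-ratio estimate inherits variance on the order of 1/s^2 from the exploration noise it must inject to form the score function. That gap is the toy analogue of the abstract's claim about nearly deterministic systems.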