Asynchronous Parallel Policy Gradient Methods for the Linear Quadratic Regulator
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Learning policies in an asynchronous parallel way is essential to the numerous successes of RL in solving large-scale problems. However, the convergence performance of such methods has not yet been rigorously evaluated. To this end, we adopt the asynchronous parallel zero-order policy gradient (AZOPG) method to solve the continuous-time linear quadratic regulation problem. Specifically, as in the celebrated A3C algorithm, multiple parallel workers asynchronously estimate PGs, which are then sent to a central master for policy updates. By quantifying the convergence rate of its policy iterations, we show the linear speedup property of AZOPG, both in theory and in simulation, which clearly reveals the advantage of using parallel workers for learning policies. A minimal code sketch of this asynchronous worker-master scheme follows the record below.
DOI: 10.48550/arxiv.2407.03233
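
As a rough illustration of the scheme described in the abstract, the following Python sketch runs several workers that each estimate a zero-order (gradient-free) policy gradient for a toy continuous-time LQR problem and asynchronously push updates to a shared master policy. This is a minimal sketch under assumed values: the system matrices A, B, Q, R, the two-point perturbation estimator, the rollout horizon, the step size, and the use of Python threads (which illustrate the asynchronous pattern but, due to the GIL, do not yield a genuine speedup) are all illustrative choices and not taken from the paper.

```python
import threading

import numpy as np

# Toy continuous-time LQR instance (assumed): dx/dt = A x + B u with
# cost J(K) = integral of x' Q x + u' R u dt under the linear policy u = -K x.
A = np.array([[-1.0, 1.0], [0.0, -1.0]])  # stable, so K = 0 is stabilizing
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)


def rollout_cost(K, x0, horizon=10.0, dt=0.01):
    """Approximate the continuous-time cost of u = -K x from state x0
    by Euler-discretized simulation over a finite horizon."""
    x = x0.copy()
    cost = 0.0
    for _ in range(int(horizon / dt)):
        u = -K @ x
        cost += (x @ Q @ x + u @ R @ u) * dt
        x = x + (A @ x + B @ u) * dt
    return cost


def zero_order_gradient(K, radius=0.05, num_samples=4, rng=None):
    """Two-point zero-order estimate of the policy gradient: perturb K
    along random unit directions and use cost differences (no analytic model)."""
    if rng is None:
        rng = np.random.default_rng()
    d = K.size
    grad = np.zeros_like(K)
    for _ in range(num_samples):
        U = rng.standard_normal(K.shape)
        U /= np.linalg.norm(U)                # random direction, unit Frobenius norm
        x0 = rng.standard_normal(K.shape[1])  # random initial state for the rollout
        c_plus = rollout_cost(K + radius * U, x0)
        c_minus = rollout_cost(K - radius * U, x0)
        grad += d / (2.0 * radius) * (c_plus - c_minus) * U
    return grad / num_samples


# Policy held by the central master; workers read and update it asynchronously.
K_master = np.zeros((2, 2))
master_lock = threading.Lock()


def worker(worker_id, num_iters=50, step_size=1e-3):
    """Each worker snapshots the (possibly stale) policy, estimates a
    zero-order PG outside the lock, then sends it to the master for an update."""
    global K_master
    rng = np.random.default_rng(worker_id)
    for _ in range(num_iters):
        with master_lock:
            K_local = K_master.copy()          # read current policy
        g = zero_order_gradient(K_local, rng=rng)  # slow step, done without the lock
        with master_lock:
            K_master = K_master - step_size * g    # asynchronous master update


threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("Learned gain K:\n", K_master)
print("Cost from x0 = [1, 1]:", rollout_cost(K_master, np.ones(2)))
```

The gradient estimate is deliberately computed outside the lock, so each worker may act on a stale snapshot of the policy while the master keeps accepting updates, which is the defining feature of the asynchronous worker-master pattern sketched here.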