Finite-Sample Analysis of Two-Time-Scale Natural Actor-Critic Algorithm

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2023-06, Vol. 68 (6), pp. 3273-3284
Main Authors: Khodadadian, Sajad; Doan, Thinh T.; Romberg, Justin; Maguluri, Siva Theja
Format: Article
Language: English
Online Access: Order full text
Description
Abstract: Actor-critic style two-time-scale algorithms are among the most popular methods in reinforcement learning and have seen great empirical success. However, their performance is not completely understood theoretically. In this article, we characterize the global convergence of an online natural actor-critic algorithm in the tabular setting using a single trajectory of samples. Our analysis applies to very general settings, as we only assume ergodicity of the underlying Markov decision process. To ensure enough exploration, we employ an $\epsilon$-greedy sampling of the trajectory. For a fixed and small enough exploration parameter $\epsilon$, we show that the two-time-scale natural actor-critic algorithm has a convergence rate of $\tilde{\mathcal{O}}(1/T^{1/4})$, where $T$ is the number of samples; this leads to a sample complexity of $\tilde{\mathcal{O}}(1/\delta^{8})$ samples to find a policy within an error of $\delta$ of the global optimum. Moreover, by carefully decreasing the exploration parameter $\epsilon$ as the iterations proceed, we present an improved sample complexity of $\tilde{\mathcal{O}}(1/\delta^{6})$ for convergence to the global optimum.
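
For illustration, here is a minimal sketch of how a tabular two-time-scale natural actor-critic loop with $\epsilon$-greedy exploration on a single trajectory can be organized. It is a reading of the setting described in the abstract, not the authors' exact algorithm: the environment interface (`env.reset()`, `env.step(a)`), the step-size exponents, and the expected-SARSA-style critic target are all illustrative assumptions.

```python
import numpy as np

def softmax_eps_greedy(logits, eps):
    """Epsilon-greedy mixture of a softmax policy over one state's actions."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return (1.0 - eps) * p + eps / len(p)

def natural_actor_critic(env, n_states, n_actions, T, eps=0.1, gamma=0.95):
    theta = np.zeros((n_states, n_actions))  # actor: softmax policy logits
    Q = np.zeros((n_states, n_actions))      # critic: Q-value estimates
    s = env.reset()                          # assumed interface: returns a state index
    for t in range(1, T + 1):
        alpha = 1.0 / t ** 0.6   # critic step size (faster time scale)
        beta = 1.0 / t           # actor step size (slower time scale)

        # sample one action from the epsilon-greedy behavior policy
        pi = softmax_eps_greedy(theta[s], eps)
        a = np.random.choice(n_actions, p=pi)
        s_next, r = env.step(a)  # assumed interface: (next state index, reward)

        # critic: one-step TD update along the single trajectory,
        # with an expected-SARSA-style target under the behavior policy
        pi_next = softmax_eps_greedy(theta[s_next], eps)
        td_error = r + gamma * pi_next @ Q[s_next] - Q[s, a]
        Q[s, a] += alpha * td_error

        # actor: natural policy gradient step; for a tabular softmax policy
        # the NPG direction reduces to the (estimated) Q-values of the state
        theta[s] += beta * Q[s]

        s = s_next
    return theta, Q
```

The two time scales appear in the step sizes: the critic's step size decays more slowly than the actor's, so the critic effectively tracks the values of the current policy while the policy itself changes on a slower clock.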
ISSN: 0018-9286
EISSN: 1558-2523
DOI: 10.1109/TAC.2022.3190032