Entropy regularized reinforcement learning using large deviation theory
Published in: Physical Review Research, 2023-05, Vol. 5 (2), p. 023085, Article 023085
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Reinforcement learning (RL) is an important field of research in machine learning that is increasingly being applied to complex optimization problems in physics. In parallel, concepts from physics have contributed to important advances in RL with developments such as entropy-regularized RL. While these developments have led to advances in both fields, obtaining analytical solutions for optimization in entropy-regularized RL is currently an open problem. In this paper, we establish a mapping between entropy-regularized RL and research in nonequilibrium statistical mechanics focusing on Markovian processes conditioned on rare events. In the long-time limit, we apply approaches from large deviation theory to derive exact analytical results for the optimal policy and optimal dynamics in Markov decision process (MDP) models of reinforcement learning. The results obtained lead to an analytical and computational framework for entropy-regularized RL which is validated by simulations. The mapping established in this work connects current research in reinforcement learning and nonequilibrium statistical mechanics, thereby opening avenues for the application of analytical and computational approaches from one field to cutting-edge problems in the other.
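The abstract's central object, entropy-regularized RL in a tabular MDP, can be made concrete with a short numerical sketch. Below is a minimal soft (entropy-regularized) value iteration on a random toy MDP; this is a generic illustration of the optimization problem the paper studies, not the authors' analytical large-deviation solution, and all sizes, names, and parameter values (n_states, beta, gamma, etc.) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of entropy-regularized ("soft") value iteration on a toy
# tabular MDP. This is a generic illustration of the soft Bellman backup
# that entropy-regularized RL optimizes, not the paper's analytical
# large-deviation derivation. All sizes and parameters below are assumptions.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
beta = 5.0    # inverse temperature: 1/beta weights the policy-entropy bonus
gamma = 0.95  # discount factor

# Random transition tensor P[s, a, s'] (rows normalized) and rewards R[s, a].
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

V = np.zeros(n_states)
for _ in range(10_000):
    Q = R + gamma * (P @ V)                # soft Q-values, shape (S, A)
    Q_max = Q.max(axis=1, keepdims=True)   # stabilized log-sum-exp
    V_new = Q_max[:, 0] + np.log(np.exp(beta * (Q - Q_max)).sum(axis=1)) / beta
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new

# Optimal entropy-regularized policy: Boltzmann distribution over soft Q.
pi = np.exp(beta * (Q - V[:, None]))
pi /= pi.sum(axis=1, keepdims=True)
print("optimal soft policy (rows = states):\n", pi.round(3))
```

The iteration above finds the optimal soft policy numerically; per the abstract, the paper's contribution is to obtain this optimum analytically in the long-time limit via large deviation theory rather than by iteration.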
ISSN: 2643-1564
DOI: 10.1103/PhysRevResearch.5.023085