Understanding Curriculum Learning in Policy Optimization for Online Combinatorial Optimization
Format: Article
Language: English
Abstract: In recent years, reinforcement learning (RL) has started to show promising results in tackling combinatorial optimization (CO) problems, in particular when coupled with curriculum learning to facilitate training. Despite emerging empirical evidence, the theoretical study of why RL helps is still in its early stages. This paper presents the first systematic study of policy optimization methods for online CO problems. We show that online CO problems can be naturally formulated as latent Markov Decision Processes (LMDPs), and prove convergence bounds on natural policy gradient (NPG) for solving LMDPs. Furthermore, our theory explains the benefit of curriculum learning: it can find a strong sampling policy and reduce the distribution shift, a critical quantity that governs the convergence rate in our theorem. For a canonical online CO problem, the Best Choice Problem (BCP), we formally prove that the distribution shift is reduced exponentially with curriculum learning, even if the curriculum is a randomly generated BCP on a smaller scale. Our theory also shows that the curriculum learning scheme used in prior work can be simplified from multi-step to single-step. Lastly, we provide extensive experiments on the Best Choice Problem, Online Knapsack, and AdWords to verify our findings.
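The Best Choice Problem (also known as the secretary problem) makes the curriculum idea concrete: an agent sees candidates one at a time and must irrevocably accept or reject each, receiving a reward only if it accepts the overall best. The sketch below is not the authors' code; it is a minimal, self-contained illustration of what a single-step curriculum might look like in practice. A tabular policy is trained with plain REINFORCE (used here as a simple stand-in for NPG) on a small BCP instance, then warm-started on a larger instance by matching fractional positions. The helper names (`train_bcp`, `warm_start`) and the position-matching transfer rule are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): tabular policy gradient on the
# Best Choice Problem, with a single-step curriculum that warm-starts the
# large-instance policy from one trained on a smaller instance.
import math
import random

def run_episode(n, theta):
    """One BCP episode of length n. theta[t] is the accept logit at step t,
    used only when candidate t is the best seen so far. Returns (reward, trajectory)."""
    order = list(range(n))
    random.shuffle(order)          # order[t] = true rank of candidate t (0 = best)
    best_seen = n                  # best true rank observed so far
    traj = []                      # (t, accept) decisions taken at best-so-far steps
    for t in range(n):
        rank = order[t]
        if rank < best_seen:       # current candidate is the best so far
            p_accept = 1.0 / (1.0 + math.exp(-theta[t]))
            accept = random.random() < p_accept
            traj.append((t, accept))
            if accept:
                return (1.0 if rank == 0 else 0.0), traj
            best_seen = rank
    return 0.0, traj               # never accepted anyone

def train_bcp(n, theta=None, iters=3000, lr=0.5):
    """Vanilla REINFORCE on a tabular sigmoid policy (a simple stand-in for NPG)."""
    if theta is None:
        theta = [0.0] * n
    for _ in range(iters):
        reward, traj = run_episode(n, theta)
        for t, accept in traj:
            p = 1.0 / (1.0 + math.exp(-theta[t]))
            grad = (1.0 - p) if accept else (-p)   # d log pi / d theta[t]
            theta[t] += lr * reward * grad         # baseline omitted for brevity
    return theta

def warm_start(theta_small, n_large):
    """Single-step curriculum: stretch the small-instance policy to a larger horizon
    by matching fractional positions t/n (an assumed transfer rule, for illustration)."""
    n_small = len(theta_small)
    return [theta_small[min(int(t * n_small / n_large), n_small - 1)]
            for t in range(n_large)]

if __name__ == "__main__":
    random.seed(0)
    theta_small = train_bcp(n=10)                  # curriculum stage: small BCP
    theta_large = train_bcp(n=50, theta=warm_start(theta_small, 50))
    wins = sum(run_episode(50, theta_large)[0] for _ in range(2000))
    print(f"empirical success rate on n=50: {wins / 2000:.3f}")  # optimal is about 1/e
```

The intent of the warm start, under these assumptions, is that the small-instance policy already rejects early candidates and accepts best-so-far candidates late in the horizon, so the large-instance learner begins near a good sampling policy rather than a uniform one; this is the informal sense in which a smaller-scale curriculum can reduce distribution shift.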
DOI: 10.48550/arxiv.2202.05423