Understanding Curriculum Learning in Policy Optimization for Online Combinatorial Optimization

In recent years, reinforcement learning (RL) has started to show promising results in tackling combinatorial optimization (CO) problems, in particular when coupled with curriculum learning to facilitate training. Despite emerging empirical evidence, the theoretical study of why RL helps is still at its...

Detailed description

Saved in:
Bibliographic details
Main authors: Zhou, Runlong, He, Zelin, Tian, Yuandong, Wu, Yi, Du, Simon S
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Zhou, Runlong ; He, Zelin ; Tian, Yuandong ; Wu, Yi ; Du, Simon S
description In recent years, reinforcement learning (RL) has started to show promising results in tackling combinatorial optimization (CO) problems, in particular when coupled with curriculum learning to facilitate training. Despite emerging empirical evidence, the theoretical study of why RL helps is still in its early stages. This paper presents the first systematic study of policy optimization methods for online CO problems. We show that online CO problems can be naturally formulated as latent Markov Decision Processes (LMDPs), and prove convergence bounds on natural policy gradient (NPG) for solving LMDPs. Furthermore, our theory explains the benefit of curriculum learning: it can find a strong sampling policy and reduce the distribution shift, a critical quantity that governs the convergence rate in our theorem. For a canonical online CO problem, the Best Choice Problem (BCP), we formally prove that the distribution shift is reduced exponentially with curriculum learning even if the curriculum is a randomly generated BCP on a smaller scale. Our theory also shows that we can simplify the curriculum learning scheme used in prior work from multi-step to single-step. Lastly, we provide extensive experiments on the Best Choice Problem, Online Knapsack, and AdWords to verify our findings.
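To make the setting concrete, the following is a minimal, self-contained sketch of the Best Choice Problem as an episodic RL environment, with a tabular softmax policy trained by plain REINFORCE and a single-step curriculum imitated by warm-starting on a small-scale instance before training at the target scale. This is illustrative only and is not the paper's implementation: the paper's analysis concerns natural policy gradient on latent MDPs, whereas this sketch uses vanilla policy gradient, and all names and design choices here (`BCPEnv`, `SoftmaxPolicy`, `reinforce`, the time-fraction binning) are assumptions made for the example.

```python
# Illustrative sketch only (not the paper's code): BCP environment, tabular
# softmax policy, REINFORCE training, and a small-to-large curriculum.
import numpy as np

class BCPEnv:
    """Best Choice Problem with horizon n: candidates arrive in random order,
    the agent only observes whether the current candidate is the best seen so
    far, and it must accept or reject irrevocably. Reward is 1 iff the
    accepted candidate is the overall best."""
    def __init__(self, n, rng):
        self.n, self.rng = n, rng

    def episode(self, policy):
        ranks = self.rng.permutation(self.n)       # higher value = better candidate
        best_so_far, traj, reward = -1, [], 0.0
        for t in range(self.n):
            is_best = int(ranks[t] > best_so_far)
            best_so_far = max(best_so_far, ranks[t])
            frac = t / self.n                      # scale-free time feature
            a = policy.sample(frac, is_best)
            traj.append((frac, is_best, a))
            if a == 1:                             # accept and stop the episode
                reward = float(ranks[t] == self.n - 1)
                break
        return traj, reward

class SoftmaxPolicy:
    """Tabular logits theta[time_bin, is_best, action]; action 1 = accept.
    Binning by time fraction lets one parameter table serve any horizon n."""
    def __init__(self, bins, rng):
        self.bins, self.rng = bins, rng
        self.theta = np.zeros((bins, 2, 2))

    def _bin(self, frac):
        return min(int(frac * self.bins), self.bins - 1)

    def probs(self, frac, is_best):
        logits = self.theta[self._bin(frac), is_best]
        z = np.exp(logits - logits.max())
        return z / z.sum()

    def sample(self, frac, is_best):
        return int(self.rng.random() < self.probs(frac, is_best)[1])

def reinforce(env, policy, episodes=20000, lr=0.5):
    """Plain Monte-Carlo policy gradient on the terminal reward (the paper
    analyzes NPG, which additionally preconditions this gradient with the
    inverse Fisher information matrix)."""
    for _ in range(episodes):
        traj, r = env.episode(policy)
        for frac, b, a in traj:
            grad = -policy.probs(frac, b)
            grad[a] += 1.0                         # gradient of log pi(a | s)
            policy.theta[policy._bin(frac), b] += lr * r * grad

def success_rate(env, policy, trials=5000):
    return float(np.mean([env.episode(policy)[1] for _ in range(trials)]))

rng = np.random.default_rng(0)
policy = SoftmaxPolicy(bins=10, rng=rng)
reinforce(BCPEnv(10, rng), policy, episodes=5000)    # curriculum step: small-scale warm-up
reinforce(BCPEnv(100, rng), policy, episodes=5000)   # continue at the target scale
print("success rate at n=100:", success_rate(BCPEnv(100, rng), policy))
```

Parameterizing the policy by the fraction t/n rather than the raw step index is what allows the small-scale parameters to be reused directly at the larger scale; the classical large-n benchmark for the BCP (observe roughly n/e candidates, then accept the next best-so-far one) succeeds with probability about 1/e ≈ 0.368 and can serve as a reference point for the learned policy.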
doi_str_mv 10.48550/arxiv.2202.05423
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2202.05423
language eng
recordid cdi_arxiv_primary_2202_05423
source arXiv.org
subjects Computer Science - Learning
title Understanding Curriculum Learning in Policy Optimization for Online Combinatorial Optimization
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T10%3A06%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Understanding%20Curriculum%20Learning%20in%20Policy%20Optimization%20for%20Online%20Combinatorial%20Optimization&rft.au=Zhou,%20Runlong&rft.date=2022-02-10&rft_id=info:doi/10.48550/arxiv.2202.05423&rft_dat=%3Carxiv_GOX%3E2202_05423%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true