An Accelerated Newton–Dinkelbach Method and Its Application to Two Variables per Inequality Systems

We present an accelerated or “look-ahead” version of the Newton–Dinkelbach method, a well-known technique for solving fractional and parametric optimization problems. This acceleration halves the Bregman divergence between the current iterate and the optimal solution within every two iterations. Using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain strongly polynomial algorithms in three application domains. (i) For linear fractional combinatorial optimization, we show a convergence bound of O(m log m) iterations; the previous best bound was O(m² log m) by Wang, Yang, and Zhang from 2006. (ii) We obtain a strongly polynomial label-correcting algorithm for solving linear feasibility systems with two variables per inequality (2VPI). For a 2VPI system with n variables and m constraints, our algorithm runs in O(mn) iterations. Every iteration takes O(mn) time for general 2VPI systems and O(m + n log n) time for the special case of deterministic Markov decision processes (DMDPs). This extends and strengthens a previous result by Madani from 2002 that showed a weakly polynomial bound for a variant of the Newton–Dinkelbach method for solving DMDPs. (iii) We give a simplified variant of the parametric submodular function minimization result from 2017 by Goemans, Gupta, and Jaillet.
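As a minimal illustrative sketch of the classic (unaccelerated) Newton–Dinkelbach iteration described in the abstract, the following maximizes a ratio c(x)/d(x) over a finite ground set via repeated linearization. This is not the paper's accelerated "look-ahead" variant; the function name and interface are assumptions for illustration only.

```python
from fractions import Fraction

def newton_dinkelbach(items, c, d):
    """Classic Newton-Dinkelbach iteration for maximizing c(x)/d(x) over a
    finite set `items`, assuming integer-valued c, d with d(x) > 0 for all x.
    Each ratio update is a strict increase, so the loop terminates."""
    x = items[0]
    delta = Fraction(c(x), d(x))
    while True:
        # Linearization oracle: maximize c(y) - delta * d(y) over all items.
        best = max(items, key=lambda y: c(y) - delta * d(y))
        if c(best) - delta * d(best) == 0:
            return delta, best            # delta is the optimal ratio
        delta = Fraction(c(best), d(best))
```

For example, maximizing (3x − 1)/x over {1, 2, 4} reaches the optimal ratio 11/4 at x = 4 after two oracle calls; the accelerated method of the paper adds a look-ahead step to this basic loop to obtain its Bregman-divergence halving guarantee.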

Detailed Description

Saved in:
Bibliographic Details
Published in: Mathematics of Operations Research, 2023-11, Vol. 48 (4), p. 1934-1958
Main Author: Dadush, Daniel
Format: Article
Language: English
Subjects:
Online Access: Full text
Description: We present an accelerated or “look-ahead” version of the Newton–Dinkelbach method, a well-known technique for solving fractional and parametric optimization problems. This acceleration halves the Bregman divergence between the current iterate and the optimal solution within every two iterations. Using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain strongly polynomial algorithms in three application domains. (i) For linear fractional combinatorial optimization, we show a convergence bound of O(m log m) iterations; the previous best bound was O(m² log m) by Wang, Yang, and Zhang from 2006. (ii) We obtain a strongly polynomial label-correcting algorithm for solving linear feasibility systems with two variables per inequality (2VPI). For a 2VPI system with n variables and m constraints, our algorithm runs in O(mn) iterations. Every iteration takes O(mn) time for general 2VPI systems and O(m + n log n) time for the special case of deterministic Markov decision processes (DMDPs). This extends and strengthens a previous result by Madani from 2002 that showed a weakly polynomial bound for a variant of the Newton–Dinkelbach method for solving DMDPs. (iii) We give a simplified variant of the parametric submodular function minimization result from 2017 by Goemans, Gupta, and Jaillet. Funding: This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme [Grants 757481-ScaleOpt and 805241-QIP].
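The 2VPI systems in the description allow at most two variables per linear inequality. As a hedged illustration of the label-correcting idea (not the paper's algorithm), the sketch below handles the classical special case of difference constraints x[u] − x[v] ≤ c with unit coefficients, which Bellman–Ford label correction solves; the function name and graph encoding are assumptions for illustration.

```python
def difference_constraints_feasible(n, constraints):
    """Bellman-Ford label correction for difference-constraint systems, a
    unit-coefficient special case of 2VPI.  Each constraint x[u] - x[v] <= c
    becomes an edge v -> u of weight c; the system is feasible iff this
    constraint graph has no negative cycle, and the stabilized labels then
    form a feasible assignment."""
    dist = [0.0] * n                   # a virtual source links to every node
    for _ in range(n):                 # at most n correction passes are needed
        changed = False
        for (u, v, c) in constraints:  # relax: enforce dist[u] <= dist[v] + c
            if dist[v] + c < dist[u]:
                dist[u] = dist[v] + c
                changed = True
        if not changed:
            return True, dist          # labels stabilized: dist is a solution
    return False, None                 # still improving after n passes: infeasible
```

For instance, the pair x[0] − x[1] ≤ −1 and x[1] − x[0] ≤ −1 sums to the contradiction 0 ≤ −2 and is reported infeasible. The paper's contribution is a strongly polynomial label-correcting method for general 2VPI systems with arbitrary coefficients, where this simple relaxation scheme no longer suffices.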
DOI: 10.1287/moor.2022.1326
Publisher: INFORMS, Linthicum
Rights: Copyright Institute for Operations Research and the Management Sciences, Nov 2023
ORCID iDs: 0000-0002-4450-8506; 0000-0002-8068-3280; 0000-0003-1152-200X; 0000-0001-5577-5012
ISSN: 0364-765X
EISSN: 1526-5471
Record ID: cdi_proquest_journals_3060826120
Source: INFORMS PubsOnLine
Subjects:
68W40
90C05
90C32
90C40
Algorithms
Combinatorial analysis
Divergence
fractional optimization
Markov analysis
Markov decision process
Markov processes
Newton–Dinkelbach method
Optimization
parametric optimization
Polynomials
Primary: 49M15
secondary: 90C27
strongly polynomial algorithm
submodular function minimization
two variables per inequality system
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T21%3A20%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=An%20Accelerated%20Newton%E2%80%93Dinkelbach%20Method%20and%20Its%20Application%20to%20Two%20Variables%20per%20Inequality%20Systems&rft.jtitle=Mathematics%20of%20operations%20research&rft.au=Dadush,%20Daniel&rft.date=2023-11-01&rft.volume=48&rft.issue=4&rft.spage=1934&rft.epage=1958&rft.pages=1934-1958&rft.issn=0364-765X&rft.eissn=1526-5471&rft_id=info:doi/10.1287/moor.2022.1326&rft_dat=%3Cproquest_cross%3E3060826120%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3060826120&rft_id=info:pmid/&rfr_iscdi=true