Global linear convergence of evolution strategies with recombination on scaling-invariant functions

Evolution Strategies (ESs) are stochastic derivative-free optimization algorithms whose most prominent representative, the CMA-ES algorithm, is widely used to solve difficult numerical optimization problems. We provide the first rigorous investigation of the linear convergence of step-size adaptive ESs involving a population and recombination, two ingredients crucially important in practice to be robust to local irregularities or multimodality. We investigate the convergence of step-size adaptive ESs with weighted recombination on composites of strictly increasing functions with continuously differentiable scaling-invariant functions with a global optimum. This function class includes functions with non-convex sublevel sets and discontinuous functions. We prove the existence of a constant r such that the logarithm of the distance to the optimum divided by the number of iterations converges to r. The constant is given as an expectation with respect to the stationary distribution of a Markov chain; its sign allows one to infer linear convergence or divergence of the ES and is found numerically. Our main condition for convergence is the increase of the expected log step-size on linear functions. In contrast to previous results, our condition is equivalent to the almost sure geometric divergence of the step-size on linear functions.
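
In symbols, the central result says there is a constant r such that, almost surely,

\[
  \lim_{t \to \infty} \frac{1}{t} \, \log \lVert X_t - x^\star \rVert = r ,
\]

where X_t is the incumbent solution at iteration t and x^\star is the global optimum (this shorthand is ours; the paper gives the precise definitions). The constant r is an expectation under the stationary distribution of a normalized Markov chain: r < 0 means linear convergence at rate |r|, and r > 0 means geometric divergence.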
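
To make the algorithmic ingredients concrete, the following is a minimal Python sketch of a step-size adaptive (mu/mu_w, lambda)-ES with weighted recombination, run on the sphere function. It is our own illustrative construction using textbook cumulative step-size adaptation (CSA) constants, not the algorithm or parameter settings analyzed in the paper; the empirical slope of log ||X_t|| estimates the rate r discussed in the abstract.

# Minimal sketch of a (mu/mu_w, lambda)-ES with weighted recombination and
# cumulative step-size adaptation (CSA). Illustrative toy only: the constants
# below are common textbook defaults, not the settings analyzed in the paper.
import numpy as np

def weighted_recombination_es(f, x0, sigma0, iterations=2000, lam=10, seed=1):
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2
    # Positive, decreasing recombination weights, normalized to sum to one.
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    mu_eff = 1.0 / np.sum(w ** 2)              # variance-effective selection mass
    c_sigma = (mu_eff + 2) / (n + mu_eff + 5)  # CSA learning rate
    d_sigma = 1 + c_sigma                      # simplified damping
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))  # E||N(0,I)||
    x, sigma, p_sigma = np.asarray(x0, float), float(sigma0), np.zeros(n)
    log_dist = []
    for _ in range(iterations):
        z = rng.standard_normal((lam, n))      # lambda candidate steps
        fitness = [f(x + sigma * zi) for zi in z]
        order = np.argsort(fitness)            # rank candidates by f-value
        z_w = w @ z[order[:mu]]                # weighted recombination of best mu
        x = x + sigma * z_w
        # CSA: accumulate normalized steps; adapt sigma from the path length.
        p_sigma = (1 - c_sigma) * p_sigma \
            + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * z_w
        sigma *= np.exp((c_sigma / d_sigma) * (np.linalg.norm(p_sigma) / chi_n - 1))
        log_dist.append(np.log(np.linalg.norm(x)))
    return x, np.array(log_dist)

if __name__ == "__main__":
    sphere = lambda v: float(v @ v)  # scaling-invariant test function, optimum at 0
    x, log_dist = weighted_recombination_es(sphere, x0=np.ones(10), sigma0=1.0)
    r_hat = (log_dist[-1] - log_dist[0]) / len(log_dist)
    print(f"estimated rate r = {r_hat:.4f} (negative means linear convergence)")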

Bibliographic details

Published in: Journal of Global Optimization, 2023-05, Vol. 86 (1), p. 163-203
Main authors: Toure, Cheikh; Auger, Anne; Hansen, Nikolaus
Format: Article
Language: English
Publisher: Springer US (New York)
Online access: Full text
DOI: 10.1007/s10898-022-01249-6
ISSN: 0925-5001
EISSN: 1573-2916
Subjects: Adaptation; Algorithms; Computer Science; Convergence; Divergence; Evolution; Invariants; Linear functions; Markov analysis; Markov chains; Markov processes; Mathematical optimization; Mathematics; Mathematics and Statistics; Normal distribution; Operations Research/Decision Theory; Optimization; Optimization algorithms; Optimization and Control; Probability; Real Functions; Robustness (mathematics); Stochastic models