Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking

Existing studies in black-box optimization for machine learning suffer from low generalizability, caused by a typically selective choice of problem instances used for training and testing different optimization algorithms. Among other issues, this practice promotes overfitting and poor-performing user guidelines. To address this shortcoming, we propose in this work a benchmark suite, OptimSuite, which covers a broad range of black-box optimization problems, ranging from academic benchmarks to real-world applications, from discrete over numerical to mixed-integer problems, from small to very large-scale problems, from noisy over dynamic to static problems, etc. We demonstrate the advantages of such a broad collection by deriving from it Automated Black Box Optimizer (ABBO), a general-purpose algorithm selection wizard. Using three different types of algorithm selection techniques, ABBO achieves competitive performance on all benchmark suites. It significantly outperforms the previous state of the art on some of them, including YABBOB and LSGO. ABBO relies on many high-quality base components. Its excellent performance is obtained without any task-specific parametrization. The OptimSuite benchmark collection, the ABBO wizard and its base solvers have all been merged into the open-source Nevergrad platform, where they are available for reproducible research.
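The abstract describes an algorithm selection wizard that routes each black-box problem to a suitable base solver. The following is a minimal illustrative sketch of that general idea in plain Python, not ABBO's actual selection logic: all solver names, descriptors, and the selection rule here are invented for illustration.

```python
# Toy algorithm-selection "wizard": pick a base solver from simple problem
# descriptors (dimension, budget), then run it on the black-box objective.
# This only illustrates the concept; ABBO's real rules live in Nevergrad.
import random

def random_search(f, dim, budget):
    # Baseline solver: sample uniformly in [-1, 1]^dim, keep the best point.
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        x = [random.uniform(-1, 1) for _ in range(dim)]
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

def local_search(f, dim, budget):
    # (1+1)-style solver: perturb the incumbent with Gaussian noise,
    # accept only improvements.
    x = [0.0] * dim
    y = f(x)
    for _ in range(budget):
        cand = [xi + random.gauss(0, 0.1) for xi in x]
        cy = f(cand)
        if cy < y:
            x, y = cand, cy
    return x, y

def select_solver(dim, budget):
    # Hand-crafted (hypothetical) selection rule: with a small budget per
    # dimension, exploit locally; otherwise explore globally.
    if budget < 50 * dim:
        return local_search
    return random_search

def wizard_minimize(f, dim, budget):
    solver = select_solver(dim, budget)
    return solver(f, dim, budget)

# Example: minimize the sphere function in 3 dimensions.
sphere = lambda x: sum(xi * xi for xi in x)
best_x, best_y = wizard_minimize(sphere, dim=3, budget=500)
```

A real wizard, as the abstract notes, combines several such selection techniques and many more base solvers, tuned across the whole benchmark collection rather than hand-crafted rules like the one above.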

Detailed description

Bibliographic details
Published in: arXiv.org 2021-02
Main authors: Meunier, Laurent; Rakotoarison, Herilalaina; Pak Kan Wong; Roziere, Baptiste; Rapin, Jeremy; Teytaud, Olivier; Moreau, Antoine; Doerr, Carola
Format: Article
Language: eng
Online access: Full text
Publisher: Cornell University Library, arXiv.org (Ithaca)
Rights: 2021. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the "License").
EISSN: 2331-8422
Source: Free E-Journals
Subjects: Algorithms; Benchmarks; Collection; Mixed integer; Optimization; Parameterization; Solvers