NeuroEvoBench: Benchmarking Evolutionary Optimizers for Deep Learning Applications
Format: Article
Language: English
Creators: Lange, Robert Tjarko; Tang, Yujin; Tian, Yingtao
Abstract: Recently, the Deep Learning community has become interested in evolutionary
optimization (EO) as a means to address hard optimization problems, e.g.
meta-learning through long inner loop unrolls or optimizing non-differentiable
operators. One core reason for this trend has been the recent innovation in
hardware acceleration and compatible software - making distributed population
evaluations much easier than before. Unlike for gradient descent-based methods
though, there is a lack of hyperparameter understanding and best practices for
EO - arguably due to severely less 'graduate student descent' and benchmarking
being performed for EO methods. Additionally, classical benchmarks from the
evolutionary community provide few practical insights for Deep Learning
applications. This poses challenges for newcomers to hardware-accelerated EO
and hinders significant adoption. Hence, we establish a new benchmark of EO
methods (NeuroEvoBench) tailored toward Deep Learning applications and
exhaustively evaluate traditional and meta-learned EO. We investigate core
scientific questions including resource allocation, fitness shaping,
normalization, regularization & scalability of EO. The benchmark is
open-sourced at https://github.com/neuroevobench/neuroevobench under Apache-2.0
license.
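
The abstract names fitness shaping as one of the questions the benchmark investigates. As illustration only (this sketch is not taken from the paper or its repository), one standard fitness-shaping transform used in evolution strategies is centered ranking, which replaces raw fitness values with their ranks rescaled to [-0.5, 0.5], making the update invariant to monotone rescaling of the objective:

```python
import numpy as np

def centered_rank(fitness):
    """Map raw fitness values to centered ranks in [-0.5, 0.5].

    Only the ordering of candidates matters after this transform,
    which makes EO updates robust to outliers and to the scale of
    the objective.
    """
    fitness = np.asarray(fitness, dtype=np.float64)
    ranks = np.empty_like(fitness)
    # Assign rank 0 to the worst candidate, n-1 to the best.
    ranks[np.argsort(fitness)] = np.arange(len(fitness))
    # Rescale ranks to the symmetric interval [-0.5, 0.5].
    return ranks / (len(fitness) - 1) - 0.5

raw = [3.0, -1.0, 10.0, 0.5]
shaped = centered_rank(raw)
```

The function name and interface here are hypothetical; the benchmarked optimizers may implement shaping differently.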
DOI: 10.48550/arxiv.2311.02394
Source: arXiv.org
Subjects: Computer Science - Learning; Computer Science - Neural and Evolutionary Computing