NeuroEvoBench: Benchmarking Evolutionary Optimizers for Deep Learning Applications
Format: Article
Language: English
Online access: Order full text
Abstract: Recently, the Deep Learning community has become interested in evolutionary optimization (EO) as a means to address hard optimization problems, e.g. meta-learning through long inner-loop unrolls or optimizing non-differentiable operators. One core reason for this trend is recent innovation in hardware acceleration and compatible software, which has made distributed population evaluations far easier than before. Unlike for gradient descent-based methods, however, there is a lack of hyperparameter understanding and best practices for EO, arguably because far less 'graduate student descent' and benchmarking have been performed for EO methods. Additionally, classical benchmarks from the evolutionary community provide few practical insights for Deep Learning applications. This poses challenges for newcomers to hardware-accelerated EO and hinders broader adoption. Hence, we establish a new benchmark of EO methods (NeuroEvoBench) tailored toward Deep Learning applications and exhaustively evaluate traditional and meta-learned EO. We investigate core scientific questions including resource allocation, fitness shaping, normalization, regularization, and the scalability of EO. The benchmark is open-sourced at https://github.com/neuroevobench/neuroevobench under the Apache-2.0 license.
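The hardware-acceleration point in the abstract refers to vectorized, JIT-compiled population evaluation. The following is a minimal sketch of that pattern in JAX; it is not taken from the paper or its codebase, and the fitness function, population size, and toy selection rule are all illustrative assumptions:

```python
# A minimal sketch (not from the paper) of hardware-accelerated,
# batched population evaluation with JAX's vmap/jit.
import jax
import jax.numpy as jnp

def fitness(params: jnp.ndarray) -> jnp.ndarray:
    # Placeholder objective (negated sphere function); a real benchmark
    # task would evaluate e.g. a neural network's loss here.
    return -jnp.sum(params ** 2)

# Evaluate the whole population in one vectorized, JIT-compiled call,
# so all candidates run in parallel on the accelerator.
evaluate_population = jax.jit(jax.vmap(fitness))

key = jax.random.PRNGKey(0)
population = jax.random.normal(key, (256, 10))  # 256 candidates, 10 params each
fitnesses = evaluate_population(population)     # shape (256,)

# Toy (mu, lambda)-style selection: average the top-32 candidates and
# sample the next population around that mean.
topk = jnp.argsort(fitnesses)[-32:]
mean = population[topk].mean(axis=0)
key, subkey = jax.random.split(key)
next_population = mean + 0.1 * jax.random.normal(subkey, (256, 10))
```

Because the population axis is handled by `vmap`, scaling to larger populations is a matter of changing one shape, which is the software convenience the abstract credits for the renewed interest in EO.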
DOI: 10.48550/arxiv.2311.02394