On the Performance of Differential Evolution for Hyperparameter Tuning
Format: Article
Language: English
Abstract: Automated hyperparameter tuning aspires to facilitate the application of
machine learning for non-experts. In the literature, different optimization
approaches are applied for that purpose. This paper investigates the
performance of Differential Evolution for tuning hyperparameters of supervised
learning algorithms for classification tasks. This empirical study involves a
range of different machine learning algorithms and datasets with various
characteristics to compare the performance of Differential Evolution with
Sequential Model-based Algorithm Configuration (SMAC), a reference Bayesian
Optimization approach. The results indicate that Differential Evolution
outperforms SMAC for most datasets when tuning a given machine learning
algorithm, particularly when breaking ties in a first-to-report fashion. Only
for the tightest computational budgets does SMAC perform better. On small
datasets, Differential Evolution outperforms SMAC by 19% (37% after
tie-breaking). In a second experiment across a range of representative datasets
taken from the literature, Differential Evolution scores 15% (23% after
tie-breaking) more wins than SMAC.
DOI: 10.48550/arxiv.1904.06960
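The abstract does not state the paper's exact Differential Evolution configuration. As background, the classic DE/rand/1/bin scheme it builds on can be sketched as follows; the function names, parameter defaults, and the toy two-dimensional objective (a stand-in for a validation-error surface over two hyperparameters) are illustrative, not taken from the paper:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimize `objective` over the box `bounds` with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize the population uniformly at random within the bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: v = x_a + F * (x_b - x_c), clipped to the bounds.
            mutant = [min(max(pop[a][d] + F * (pop[b][d] - pop[c][d]),
                              bounds[d][0]), bounds[d][1])
                      for d in range(dim)]
            # Binomial crossover: keep at least one mutant component.
            j_rand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            # Greedy selection: the trial replaces the parent if it is no worse.
            f_trial = objective(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]

# Hypothetical validation-error surface over two hyperparameters
# (e.g. a log learning rate and a log regularization strength),
# with its minimum at (-3, 1).
def val_error(x):
    lr, reg = x
    return (lr + 3.0) ** 2 + (reg - 1.0) ** 2

best_x, best_f = differential_evolution(val_error, bounds=[(-6, 0), (-2, 4)])
```

In a real tuning run, `val_error` would train the supervised learner with the candidate hyperparameters and return its validation loss; the greedy selection step is what makes DE's progress monotone per individual.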