Winning solutions and post-challenge analyses of the ChaLearn AutoDL challenge 2019
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: IEEE Transactions on Pattern Analysis and Machine Intelligence
(T-PAMI) 2021. This paper reports the results and post-challenge analyses of
ChaLearn's AutoDL challenge series, which helped sort out a profusion of
AutoML solutions for Deep Learning (DL) that had been introduced in a variety
of settings but lacked fair comparisons. All input data modalities (time
series, images, videos, text, tabular) were formatted as tensors, and all
tasks were multi-label classification problems. Code submissions were executed
on hidden tasks, with limited time and computational resources, pushing
solutions that get results quickly. In this setting, DL methods dominated,
though popular Neural Architecture Search (NAS) was impractical. Solutions
relied on fine-tuned pre-trained networks, with architectures matched to the
data modality. Post-challenge tests revealed no improvement from training
beyond the imposed time limit. While no component is particularly original or
novel, a high-level modular organization emerged, featuring a "meta-learner",
"data ingestor", "model selector", "model/learner", and "evaluator". This
modularity enabled ablation studies, which revealed the importance of
(off-platform) meta-learning, ensembling, and efficient data management.
Experiments on heterogeneous module combinations further confirm the (local)
optimality of the winning solutions. Our challenge legacy includes an
everlasting benchmark (http://autodl.chalearn.org), the open-sourced code of
the winners, and a free "AutoDL self-service".
DOI: 10.48550/arxiv.2201.03801