Benchmarking Neural Network Training Algorithms
Main Authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Training algorithms, broadly construed, are an essential part of every deep
learning pipeline. Training algorithm improvements that speed up training
across a wide variety of workloads (e.g., better update rules, tuning
protocols, learning rate schedules, or data selection schemes) could save time,
save computational resources, and lead to better, more accurate models.
Unfortunately, as a community, we are currently unable to reliably identify
training algorithm improvements, or even determine the state-of-the-art
training algorithm. In this work, using concrete experiments, we argue that
real progress in speeding up training requires new benchmarks that resolve
three basic challenges faced by empirical comparisons of training algorithms:
(1) how to decide when training is complete and precisely measure training
time, (2) how to handle the sensitivity of measurements to exact workload
details, and (3) how to fairly compare algorithms that require hyperparameter
tuning. In order to address these challenges, we introduce a new, competitive,
time-to-result benchmark using multiple workloads running on fixed hardware,
the AlgoPerf: Training Algorithms benchmark. Our benchmark includes a set of
workload variants that make it possible to detect benchmark submissions that
are more robust to workload changes than current widely-used methods. Finally,
we evaluate baseline submissions constructed using various optimizers that
represent current practice, as well as other optimizers that have recently
received attention in the literature. These baseline results collectively
demonstrate the feasibility of our benchmark, show that non-trivial gaps
between methods exist, and set a provisional state-of-the-art for future
benchmark submissions to try and surpass. |
---|---|
DOI: | 10.48550/arxiv.2306.07179 |
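
The core measurement the abstract describes, running a training algorithm until a fixed validation target is reached and recording the wall-clock time on fixed hardware, can be sketched roughly as below. This is a minimal illustration only, not the AlgoPerf harness; the function names, the evaluation cadence, and the way evaluation time is charged are all assumptions made for the sake of the example.

```python
# Minimal sketch of a time-to-result measurement in the spirit of the
# benchmark described above. NOT the AlgoPerf harness; all names and
# parameters here are illustrative assumptions.
import time


def time_to_result(train_step, evaluate, target_metric, max_seconds, eval_every=100):
    """Repeatedly call `train_step` and return the wall-clock seconds needed
    for `evaluate()` to reach `target_metric`, or None if the time budget
    `max_seconds` runs out first."""
    start = time.monotonic()
    step = 0
    while time.monotonic() - start < max_seconds:
        train_step()            # one optimizer update on a training batch
        step += 1
        if step % eval_every == 0:
            # Evaluation cost is counted toward the measured time here; a
            # real benchmark must decide whether and how to charge for it.
            if evaluate() >= target_metric:
                return time.monotonic() - start
    return None  # target not reached within the budget
```

In a competitive setting, such times would be collected per submission across a suite of fixed workloads on fixed hardware and then aggregated into a single score, which is the kind of comparison the benchmark is designed to support.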