Knowledge Distillation for Efficient Sequences of Training Runs
Format: Article
Language: English
Abstract: In many practical scenarios, such as hyperparameter search or continual retraining with new data, related training runs are performed many times in sequence. Current practice is to train each of these models independently from scratch. We study the problem of exploiting the computation invested in previous runs to reduce the cost of future runs using knowledge distillation (KD). We find that augmenting future runs with KD from previous runs dramatically reduces the time needed to train these models, even accounting for the overhead of KD. We improve on these results with two strategies that reduce the overhead of KD by 80-90% with minimal effect on accuracy, yielding large Pareto improvements in overall cost. We conclude that KD is a promising avenue for reducing the cost of the expensive preparatory work that precedes training final models in practice.
DOI: 10.48550/arxiv.2303.06480
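
The approach described in the abstract augments each new training run with knowledge distillation from a model kept from a previous run. For concreteness, below is a minimal sketch, in Python with PyTorch, of one standard KD training step; the temperature T, mixing weight alpha, and function names are illustrative assumptions, not details taken from the paper, and the paper's two overhead-reduction strategies are not shown. The extra cost of such a step relative to plain training comes mainly from the teacher forward pass.

    # Sketch of one standard knowledge-distillation training step (PyTorch).
    # `teacher` stands in for a model retained from a previous run; `student`
    # is the new run being accelerated. T and alpha are generic KD
    # hyperparameters, not values from the paper.
    import torch
    import torch.nn.functional as F

    def kd_train_step(student, teacher, optimizer, inputs, labels, T=4.0, alpha=0.5):
        student.train()
        teacher.eval()
        with torch.no_grad():                 # teacher only provides soft targets
            teacher_logits = teacher(inputs)
        student_logits = student(inputs)

        # Hard-label loss on the ground-truth targets.
        ce_loss = F.cross_entropy(student_logits, labels)

        # Soft-label loss: match the teacher's temperature-smoothed distribution.
        kd_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)

        loss = alpha * ce_loss + (1.0 - alpha) * kd_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In a sequence of related runs, any earlier checkpoint (for example, a model from a previous hyperparameter trial) could be passed in as `teacher`.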