Modulating Regularization Frequency for Efficient Compression-Aware Model Training
Saved in:

Main Authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | While model compression is increasingly important because of large neural
network size, compression-aware training is challenging as it requires
sophisticated model modifications and longer training time. In this paper, we
introduce regularization frequency (i.e., how often compression is performed
during training) as a new regularization technique for a practical and
efficient compression-aware training method. For various regularization
techniques, such as weight decay and dropout, optimizing the regularization
strength is crucial to improve generalization in Deep Neural Networks (DNNs).
While model compression also demands the right amount of regularization, the
regularization strength incurred by model compression has been controlled only
by the compression ratio. Through various experiments, we show that
regularization frequency critically affects the regularization strength of
model compression. By combining regularization frequency and compression
ratio, the amount of weight updates by model compression per mini-batch can be
optimized to achieve the best model accuracy. Modulating regularization
frequency is implemented by compressing the model only occasionally, whereas
conventional compression-aware training performs compression at every
mini-batch. |
DOI: | 10.48550/arxiv.2105.01875 |
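
To make the idea concrete, below is a minimal PyTorch sketch (not the authors' released implementation) of compression-aware training with a tunable regularization frequency. Magnitude pruning stands in for the generic compression step, and the parameter names `compress_every` (mini-batches between compressions, i.e., the inverse of regularization frequency) and `ratio` (fraction of weights zeroed) are hypothetical names chosen for illustration.

```python
# Minimal sketch of compression-aware training with a modulated
# regularization frequency. Magnitude pruning is a stand-in for the
# generic "model compression" step; `compress_every` and `ratio` are
# hypothetical parameter names, not from the paper's code.
import torch
import torch.nn as nn

def magnitude_prune_(model: nn.Module, ratio: float) -> None:
    """In-place magnitude pruning: zero the smallest `ratio` fraction of
    weights in every Linear/Conv2d layer. Pruned weights may regrow in
    later updates, so periodic pruning acts as a regularizer."""
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.data
            k = int(ratio * w.numel())
            if k == 0:
                continue
            # k-th smallest absolute weight is the pruning threshold
            threshold = w.abs().flatten().kthvalue(k).values
            w[w.abs() <= threshold] = 0.0

def train_compression_aware(model, loader, epochs=1, lr=0.1,
                            ratio=0.5, compress_every=100):
    """Train while compressing only every `compress_every` mini-batches,
    instead of at every mini-batch as in conventional compression-aware
    training."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    step = 0
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            step += 1
            if step % compress_every == 0:  # occasional compression
                magnitude_prune_(model, ratio)
    magnitude_prune_(model, ratio)  # final compression before deployment
    return model
```

Setting `compress_every=1` recovers conventional per-mini-batch compression-aware training; increasing it lowers the regularization frequency, so the amount of weight updates caused by compression per mini-batch can be traded off against the compression ratio, as the abstract describes.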