Comparing Rewinding and Fine-tuning in Neural Network Pruning
Format: Article
Language: English
Abstract: ICLR 2020. Many neural network pruning algorithms proceed in three steps: train the network to completion, remove unwanted structure to compress the network, and retrain the remaining structure to recover lost accuracy. The standard retraining technique, fine-tuning, trains the unpruned weights from their final trained values using a small fixed learning rate. In this paper, we compare fine-tuning to alternative retraining techniques. Weight rewinding (as proposed by Frankle et al. (2019)) rewinds unpruned weights to their values from earlier in training and retrains them from there using the original training schedule. Learning rate rewinding (which we propose) trains the unpruned weights from their final values using the same learning rate schedule as weight rewinding. Both rewinding techniques outperform fine-tuning, forming the basis of a network-agnostic pruning algorithm that matches the accuracy and compression ratios of several more network-specific state-of-the-art techniques.
DOI: 10.48550/arxiv.2003.02389
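The three retraining techniques compared in the abstract differ only in which weights retraining starts from and which learning rate schedule it uses. The sketch below is a minimal illustration rather than the authors' released code: the toy fully connected model, global magnitude pruning at 50% sparsity, the step learning rate schedule, and the rewind point of epoch 10 in a 90-epoch run are all assumptions chosen for the example, but the three retraining loops follow the definitions given above.

```python
# Hypothetical sketch of fine-tuning, weight rewinding, and learning rate
# rewinding after magnitude pruning. Model, data, schedule, and rewind epoch
# are illustrative assumptions, not values from the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def lr_at(epoch, schedule=((0, 0.1), (30, 0.01), (60, 0.001))):
    """Piecewise-constant learning rate schedule (assumed for illustration)."""
    lr = schedule[0][1]
    for start, value in schedule:
        if epoch >= start:
            lr = value
    return lr


def train(model, epochs, lr_fn, data):
    """Train with SGD, setting the learning rate each epoch from lr_fn."""
    opt = torch.optim.SGD(model.parameters(), lr=lr_fn(0))
    for epoch in range(epochs):
        for group in opt.param_groups:
            group["lr"] = lr_fn(epoch)
        for x, y in data:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
    return model


def magnitude_prune(model, amount=0.5):
    """Globally prune the smallest-magnitude weights; masks stay applied."""
    params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    data = [(torch.randn(32, 20), torch.randint(0, 4, (32,))) for _ in range(10)]
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))

    total_epochs, rewind_epoch = 90, 10
    # Train to the rewind point, snapshot the weights, then train to completion.
    train(model, rewind_epoch, lr_at, data)
    rewind_state = copy.deepcopy(model.state_dict())
    train(model, total_epochs - rewind_epoch,
          lambda e: lr_at(e + rewind_epoch), data)

    # 1) Fine-tuning: prune, then retrain the final weights at a small fixed rate.
    finetuned = magnitude_prune(copy.deepcopy(model))
    train(finetuned, 30, lambda e: 0.001, data)

    # 2) Weight rewinding: prune, reset the unpruned weights to their values at
    #    the rewind epoch, and rerun the remainder of the original schedule.
    rewound = magnitude_prune(copy.deepcopy(model))
    for name, module in rewound.named_modules():
        if isinstance(module, nn.Linear):
            # Load the earlier weights underneath the pruning masks.
            module.weight_orig.data.copy_(rewind_state[f"{name}.weight"])
            module.bias.data.copy_(rewind_state[f"{name}.bias"])
    train(rewound, total_epochs - rewind_epoch,
          lambda e: lr_at(e + rewind_epoch), data)

    # 3) Learning rate rewinding: keep the final weights, but retrain with the
    #    same rewound learning rate schedule as weight rewinding.
    lr_rewound = magnitude_prune(copy.deepcopy(model))
    train(lr_rewound, total_epochs - rewind_epoch,
          lambda e: lr_at(e + rewind_epoch), data)
```

Under these assumptions, the only difference between the second and third variants is whether the weights are reset alongside the learning rate schedule, which is exactly the comparison the paper draws.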