Structural Pruning in Deep Neural Networks: A Small-World Approach
Format: Article
Language: English
Abstract: Deep Neural Networks (DNNs) are usually over-parameterized, causing excessive memory and interconnection cost on the hardware platform. Existing pruning approaches remove secondary parameters at the end of training to reduce the model size, but because they do not exploit the intrinsic network property, they still require the full interconnection to prepare the network. Inspired by the observation that brain networks follow the Small-World model, we propose a novel structural pruning scheme, which includes (1) hierarchically trimming the network into a Small-World model before training, (2) training the network on a given dataset, and (3) optimizing the network for accuracy. The new scheme effectively reduces both the model size and the interconnection needed before training, achieving a locally clustered and globally sparse model. We demonstrate our approach on LeNet-5 for MNIST and VGG-16 for CIFAR-10, decreasing the number of parameters to 2.3% and 9.02% of the baseline model, respectively.
DOI: 10.48550/arxiv.1911.04453
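
To illustrate the first step of the scheme, the sketch below builds a small-world connectivity mask for a single fully-connected layer before training, in the spirit of the Watts-Strogatz model: each output unit first connects to a few locally clustered inputs, and a fraction of those edges is then rewired to random long-range inputs. The function small_world_mask and the parameters k (local fan-in) and p (rewiring probability) are assumptions made for this illustration, not the paper's exact trimming procedure.

```python
# Minimal sketch (assumed construction, not the authors' exact method):
# a Watts-Strogatz-style sparse connectivity mask for one dense layer.
import numpy as np

def small_world_mask(n_in, n_out, k=8, p=0.1, seed=0):
    """Return an (n_out, n_in) binary mask. Each output unit starts with k
    locally clustered inputs; each edge is then rewired to a random input
    with probability p, giving a locally clustered, globally sparse layer."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_out, n_in), dtype=np.uint8)
    for j in range(n_out):
        # Map output unit j onto the input axis and take its k nearest inputs.
        center = int(round(j * (n_in - 1) / max(n_out - 1, 1)))
        neighbors = [(center + d) % n_in for d in range(-(k // 2), k - k // 2)]
        for i in neighbors:
            if rng.random() < p:
                # Rewire: replace the local edge with a random long-range one.
                i = rng.integers(n_in)
            mask[j, i] = 1
    return mask

# Example: a 784 -> 300 layer kept at roughly k/n_in density before training;
# the mask would multiply the layer's weight matrix elementwise.
mask = small_world_mask(784, 300, k=16, p=0.1)
print(mask.mean())  # fraction of surviving connections
```

In this sketch the mask is fixed before training, so only the surviving connections ever need to be stored or routed in hardware; training and any subsequent accuracy-oriented fine-tuning (steps 2 and 3 of the scheme) would operate on the masked weights.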