Weak sub-network pruning for strong and efficient neural networks



Bibliographic Details
Published in: Neural Networks, 2021-12, Vol. 144, pp. 614-626
Main Authors: Guo, Qingbei; Wu, Xiao-Jun; Kittler, Josef; Feng, Zhiquan
Format: Article
Language: English
Online Access: Full text
Description
Summary: Pruning methods to compress and accelerate deep convolutional neural networks (CNNs) have recently attracted growing attention, with a view to deploying pruned networks on resource-constrained hardware devices. However, most existing methods explore pruning at small granularities, such as individual weights, kernels and filters. As a result, they must iteratively prune the whole network at these small granularities to reach a high compression ratio with little performance loss. To address these issues, we theoretically analyze the relationship between activation and gradient sparsity and channel saliency. Based on our findings, we propose a novel and effective method of weak sub-network pruning (WSP). Specifically, for a well-trained network model, we divide the whole compression process into two non-iterative stages. The first stage directly obtains a strong sub-network by pruning the weakest one. We identify the less important channels across all layers to determine the weakest sub-network, in which each selected channel makes a minimal contribution to both the feed-forward and feed-backward processes. A one-shot pruning strategy then removes this weak sub-network to form a strong sub-network ready for fine-tuning, while significantly reducing the impact of network depth and width on compression efficiency, especially for deep and wide architectures. The second stage globally fine-tunes the strong sub-network for several epochs to restore its original recognition accuracy. Furthermore, our method acts on the fully-connected layers as well as the convolutional layers, yielding simultaneous compression and acceleration. Comprehensive experiments on VGG16 and ResNet-50 across a variety of popular benchmarks, such as ImageNet-1K, CIFAR-10, CUB-200 and PASCAL VOC, demonstrate that WSP achieves superior performance on classification, domain adaptation and object detection tasks with a small model size. Our source code is available at https://github.com/QingbeiGuo/WSP.git.
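
The summary above describes a two-stage procedure: score every channel by how little it contributes to the forward (activation) and backward (gradient) passes, remove the globally weakest channels in one shot, and then fine-tune the surviving sub-network. The following is a minimal illustrative sketch of that idea in PyTorch. The toy network, the product-of-magnitudes saliency score, the 50% pruning ratio and the mask-style zeroing of filters are assumptions made for illustration only, not the authors' implementation (which also covers fully-connected layers and physically removes channels; see https://github.com/QingbeiGuo/WSP.git).

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN standing in for a deep architecture such as VGG16 or ResNet-50.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
convs = [m for m in model if isinstance(m, nn.Conv2d)]

# Capture each conv layer's output feature map so that both its activation and,
# after backward, its gradient are available for the channel saliency score.
feature_maps = {}
def make_hook(layer):
    def hook(module, inputs, output):
        output.retain_grad()
        feature_maps[layer] = output
    return hook
handles = [m.register_forward_hook(make_hook(m)) for m in convs]

# One forward/backward pass on a stand-in mini-batch (random data for illustration).
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), y)
loss.backward()

# Stage 1: score every channel of every conv layer, mark the globally weakest
# fraction as the "weak sub-network", and remove it in one shot
# (approximated here by zeroing the corresponding filters).
scores = []  # (layer, channel_index, saliency)
for layer, fmap in feature_maps.items():
    act = fmap.detach().abs().mean(dim=(0, 2, 3))   # per-channel activation magnitude
    grad = fmap.grad.abs().mean(dim=(0, 2, 3))      # per-channel gradient magnitude
    saliency = act * grad                           # illustrative combination, not the paper's formula
    scores += [(layer, c, s.item()) for c, s in enumerate(saliency)]

prune_ratio = 0.5                                   # illustrative global pruning ratio
weakest = sorted(scores, key=lambda t: t[2])[: int(prune_ratio * len(scores))]
with torch.no_grad():
    for layer, c, _ in weakest:
        layer.weight[c].zero_()                     # one-shot removal of a weak channel
        if layer.bias is not None:
            layer.bias[c].zero_()

for h in handles:
    h.remove()

# Stage 2: globally fine-tune the remaining strong sub-network (a single
# illustrative SGD step; the paper fine-tunes for several epochs).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
optimizer.zero_grad()
F.cross_entropy(model(x), y).backward()
optimizer.step()

In a faithful implementation the pruned channels would be physically removed (or the pruning mask re-applied after each update) before the global fine-tuning stage, so that the compressed architecture stays compressed while accuracy is restored.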
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2021.09.015