CSHE: network pruning by using cluster similarity and matrix eigenvalues

Bibliographic Details
Published in: International Journal of Machine Learning and Cybernetics, 2022-02, Vol. 13 (2), p. 371-382
Main authors: Shao, Mingwen, Dai, Junhui, Wang, Ran, Kuang, Jiandong, Zuo, Wangmeng
Format: Article
Language: English
Online access: Full text
Description
Abstract: Although deep convolutional neural networks (CNNs) have achieved significant success in computer vision applications, the real-world deployment of CNNs is often limited by computing resources and memory constraints. As a mainstream deep model compression technology, neural network pruning offers a promising way to reduce a model's parameters and computation. In this paper, we propose a novel filter pruning method that combines information from convolution filters and feature maps for convolutional neural network compression, namely network pruning by using cluster similarity and large eigenvalues (CSHE). First, based on the convolution operation, we explore the similarity relationship among the feature maps generated by the corresponding filters. Concretely, a clustering algorithm is used to group filters by similarity, which in turn guides the grouping of the corresponding feature maps. Second, the proposed method uses the large eigenvalues of the feature maps to rank the importance of filters. Finally, we prune the low-ranking filters and retain the high-ranking ones. The proposed method eliminates redundancy among convolution filters by applying the large eigenvalues of feature maps on the basis of filter similarity. In this way, most of the representative information in the network is retained and the pruned results can be easily reproduced. Experiments show that the accuracy of the pruned sparse deep network obtained by CSHE on the CIFAR-10 and ImageNet ILSVRC-12 classification tasks is almost the same as that of the reference network, without any additional constraints.
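As a rough illustration of the pipeline the abstract describes (cluster filters by similarity, score each filter by a large eigenvalue derived from its feature map, prune the low-ranking filters), here is a minimal sketch in Python. It is not the authors' implementation: the Gram-matrix eigenvalue score, the k-means clustering, and the parameters n_clusters and prune_ratio are all illustrative assumptions, not values or choices taken from the paper.

    # Minimal sketch of a CSHE-style pruning criterion. Hypothetical
    # reading of the method; the paper's exact scoring may differ.
    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.cluster import KMeans

    def cshe_rank_filters(conv: nn.Conv2d, x: torch.Tensor,
                          n_clusters: int = 4, prune_ratio: float = 0.5):
        """Cluster filters by weight similarity, score each filter by the
        largest eigenvalue of its feature map's Gram matrix, and keep the
        top-ranked filters within each cluster."""
        with torch.no_grad():
            fmaps = conv(x)  # (N, C_out, H, W)
        # Step 1: cluster filters by their flattened weights; the cluster
        # labels guide the grouping of the corresponding feature maps.
        w = conv.weight.detach().reshape(conv.out_channels, -1).numpy()
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(w)
        # Step 2: score each filter by the largest eigenvalue of the Gram
        # matrix of its batch-averaged feature map.
        scores = np.zeros(conv.out_channels)
        for c in range(conv.out_channels):
            fm = fmaps[:, c].mean(dim=0).numpy()      # (H, W)
            gram = fm @ fm.T                          # symmetric PSD
            scores[c] = np.linalg.eigvalsh(gram)[-1]  # largest eigenvalue
        # Step 3: within each cluster, keep the highest-scoring filters and
        # mark the rest for pruning, so every similarity group retains at
        # least one representative.
        keep = np.zeros(conv.out_channels, dtype=bool)
        for k in range(n_clusters):
            idx = np.where(labels == k)[0]
            n_keep = max(1, int(round(len(idx) * (1 - prune_ratio))))
            keep[idx[np.argsort(scores[idx])[::-1][:n_keep]]] = True
        return keep, scores

    # Usage with random data, purely for illustration:
    conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
    x = torch.randn(8, 3, 32, 32)
    keep_mask, scores = cshe_rank_filters(conv, x)
    print(f"keeping {keep_mask.sum()} of {len(keep_mask)} filters")

Keeping at least one filter per cluster reflects the abstract's stated goal of retaining most of the representative information in the network; how the paper actually allocates the pruning budget across clusters is not specified in this record.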
ISSN: 1868-8071, 1868-808X
DOI: 10.1007/s13042-021-01411-8