Convolution neural network with low operation FLOPS and high accuracy for image recognition


Detailed Description

Bibliographic Details
Published in: Journal of Real-Time Image Processing 2021-08, Vol. 18 (4), p. 1309–1319
Main authors: Hsia, Shih-Chang; Wang, Szu-Hong; Chang, Chuan-Yu
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Convolutional neural networks are made deeper and wider for better accuracy, but this requires more computation. As a network goes deeper, more information is lost. To address this drawback, the residual structure was developed to connect information from previous layers. This is a good way to prevent the loss of information, but it requires a huge number of parameters for deep-layer operations. In this study, a fast computational algorithm is proposed to reduce the parameters and save operations by modifying the DenseNet deep-layer block. With channel-merging procedures, this solution mitigates the multiplicative growth of the parameter count in deeper layers. The approach not only reduces parameters and FLOPs but also maintains high accuracy. Compared with the original DenseNet and ResNet-110, the parameters can be efficiently reduced by about 30–70% while accuracy degrades only slightly. The lightweight network can be implemented on a low-cost embedded system for real-time applications.
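The parameter growth the abstract refers to can be illustrated with a back-of-envelope count (a sketch under assumed layer sizes, not the authors' exact architecture): in a standard DenseNet block, every 3×3 convolution sees all previously produced feature maps, so its parameter count grows with depth; merging the inputs down to a fixed channel width (here modeled as a hypothetical 1×1 merge convolution) caps that per-layer growth.

```python
def conv_params(in_ch, out_ch, k=3):
    # weight count of a k x k convolution (biases ignored)
    return in_ch * out_ch * k * k

def dense_block_params(layers, in_ch, growth):
    # standard DenseNet block: layer i sees in_ch + i * growth channels,
    # so the 3x3 conv cost grows linearly with depth
    return sum(conv_params(in_ch + i * growth, growth) for i in range(layers))

def merged_block_params(layers, in_ch, growth, merged_ch):
    # hypothetical channel-merging variant: the accumulated channels are
    # first reduced to a fixed merged_ch width by a 1x1 conv, then the
    # 3x3 conv runs on that fixed width, so its cost no longer grows
    total = 0
    for i in range(layers):
        cur = in_ch + i * growth
        total += conv_params(cur, merged_ch, k=1)  # merge channels
        total += conv_params(merged_ch, growth, k=3)
    return total

dense = dense_block_params(12, 64, 32)
merged = merged_block_params(12, 64, 32, 64)
print(dense, merged, round(1 - merged / dense, 3))  # prints 829440 405504 0.511
```

With these assumed sizes (12 layers, 64 input channels, growth rate 32, merge width 64) the merged block uses roughly half the parameters of the plain dense block, consistent in spirit with the 30–70% reduction the abstract reports; the exact figure depends entirely on the chosen widths.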
ISSN: 1861-8200
1861-8219
DOI: 10.1007/s11554-021-01140-9