Gabor Filter Assisted Energy Efficient Fast Learning Convolutional Neural Networks

Bibliographic Details
Published in: arXiv.org, 2017-05
Authors: Sarwar, Syed Shakib; Panda, Priyadarshini; Roy, Kaushik
Format: Article
Language: English
Abstract: Convolutional Neural Networks (CNNs) are being increasingly used in computer vision for a wide range of classification and recognition problems. However, training these large networks demands high computational time and energy; hence, their energy-efficient implementation is of great interest. In this work, we reduce the training complexity of CNNs by replacing certain weight kernels of a CNN with Gabor filters. The convolutional layers use the Gabor filters as fixed weight kernels, which extract intrinsic features, alongside regular trainable weight kernels. This combination creates a balanced system that trains faster and with less energy than the standalone CNN (without any Gabor kernels), in exchange for a tolerable accuracy degradation. We show that the accuracy degradation can be mitigated by partially training the Gabor kernels for a small fraction of the total training cycles. We evaluated the proposed approach on four benchmark applications. Simple tasks like face detection and character recognition (MNIST and TiCH) were implemented using the LeNet architecture, while a more complex object recognition task (CIFAR10) was implemented on a state-of-the-art deep CNN (Network in Network) architecture. The proposed approach yields a 1.31-1.53x improvement in training energy over the conventional CNN implementation. We also obtain improvements of up to 1.4x in training time, up to 2.23x in storage requirements, and up to 2.2x in memory access energy. The accuracy degradation suffered by the approximate implementations is within 0-3% of the baseline.
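
To make the core idea concrete, the following is a minimal PyTorch sketch (not the authors' code) of a convolutional layer that mixes a frozen Gabor filter bank with regular trainable kernels. The filter parameters (sigma, wavelength, kernel size) and the channel split are illustrative assumptions, not values from the paper.

    import math
    import torch
    import torch.nn as nn

    def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=0.5):
        # Sample a 2-D Gabor filter: a cosine carrier modulated by a
        # Gaussian envelope, with the coordinate frame rotated by theta.
        half = size // 2
        ys, xs = torch.meshgrid(
            torch.arange(-half, half + 1, dtype=torch.float32),
            torch.arange(-half, half + 1, dtype=torch.float32),
            indexing="ij",
        )
        x_r = xs * math.cos(theta) + ys * math.sin(theta)
        y_r = -xs * math.sin(theta) + ys * math.cos(theta)
        envelope = torch.exp(-(x_r ** 2 + (gamma * y_r) ** 2) / (2 * sigma ** 2))
        carrier = torch.cos(2 * math.pi * x_r / lambd + psi)
        return envelope * carrier

    class GaborAssistedConv(nn.Module):
        # Convolution whose output channels are split between a fixed
        # (non-trainable) Gabor filter bank and ordinary trainable kernels.
        def __init__(self, in_ch, n_gabor, n_train, size=5):
            super().__init__()
            # One Gabor filter per evenly spaced orientation (assumed choice).
            thetas = [i * math.pi / n_gabor for i in range(n_gabor)]
            bank = torch.stack(
                [gabor_kernel(size, sigma=2.0, theta=t, lambd=4.0) for t in thetas]
            )
            # Replicate each filter across input channels: (n_gabor, in_ch, k, k).
            fixed_weight = bank.unsqueeze(1).repeat(1, in_ch, 1, 1)
            self.fixed = nn.Conv2d(in_ch, n_gabor, size, padding=size // 2, bias=False)
            self.fixed.weight = nn.Parameter(fixed_weight, requires_grad=False)  # frozen
            self.trainable = nn.Conv2d(in_ch, n_train, size, padding=size // 2)

        def forward(self, x):
            # Concatenate fixed and trainable feature maps along channels.
            return torch.cat([self.fixed(x), self.trainable(x)], dim=1)

    # Usage: a LeNet-style first layer with 3 fixed + 3 trainable kernels.
    layer = GaborAssistedConv(in_ch=1, n_gabor=3, n_train=3)
    out = layer(torch.randn(8, 1, 28, 28))  # shape: (8, 6, 28, 28)

Freezing the Gabor bank removes its gradient computation and weight updates from training, which is where the energy and time savings described in the abstract come from; briefly enabling gradients on the bank for a few epochs would correspond to the partial training the authors use to recover accuracy.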
ISSN: 2331-8422
DOI: 10.48550/arxiv.1705.04748