Neuromorphic Time-Multiplexed Reservoir Computing With On-the-Fly Weight Generation for Edge Devices

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, June 2022, Vol. 33 (6), pp. 2676-2685
Authors: Gupta, Sarthak; Chakraborty, Satrajit; Thakur, Chetan Singh
Format: Article
Language: English
Description
Abstract: The human brain has evolved to perform complex and computationally expensive cognitive tasks, such as audio-visual perception and object detection, with ease. For instance, the brain can recognize speech in different dialects and perform other cognitive tasks, such as attention, memory, and motor control, with just 20 W of power consumption. Taking inspiration from neural systems, we propose a low-power neuromorphic hardware architecture to perform classification on temporal data at the edge. The proposed architecture uses a neuromorphic cochlea model for feature extraction and a reservoir computing (RC) framework as a classifier. In the proposed hardware architecture, the RC framework is modified for on-the-fly generation of the reservoir connectivity, along with binary feedforward and reservoir weights. In addition, a single large reservoir is split into multiple small reservoirs for efficient use of hardware resources. These modifications reduce the computational and memory resources required, resulting in a lower power budget. The proposed classifier is validated on speech and human activity recognition (HAR) tasks. We have prototyped our hardware architecture on Intel's Cyclone 10 low-power series field-programmable gate array (FPGA), consuming only 4790 logic elements (LEs) and 34.9 kB of memory, making it a strong candidate for edge computing applications. Moreover, we have implemented a complete speech recognition system comprising the feature extraction block (cochlea model) and the proposed classifier, utilizing 15 532 LEs and 38.4 kB of memory. By combining the proposed idea of multiple small reservoirs with on-the-fly generation of binary reservoir weights, our architecture reduces power consumption and memory requirements by an order of magnitude compared to existing FPGA models for speech recognition tasks of similar complexity.
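The core idea of the abstract, regenerating sparse binary (+1/-1) reservoir weights from a pseudorandom seed at each use instead of storing them, and splitting one large reservoir into several small independent ones, can be sketched in software. The following is a minimal Python illustration under stated assumptions, not the authors' FPGA implementation; the function names, the leaky-tanh neuron model, and parameters such as `density` and `leak` are illustrative choices:

```python
import numpy as np

def binary_weights(seed, shape, density=0.2):
    """Regenerate a sparse binary (+1/-1) weight matrix from a seed.

    Because the same seed always yields the same matrix, the weights
    never need to be stored -- only the seed does. This mimics the
    paper's on-the-fly weight generation idea (parameters illustrative).
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(shape) < density          # sparse connectivity
    signs = rng.integers(0, 2, shape) * 2 - 1   # random +1 / -1
    return mask * signs

def reservoir_step(x, u, seed_res, seed_in, leak=0.3):
    """One leaky-integrator reservoir update; weights rebuilt from seeds."""
    n, m = x.size, u.size
    W = binary_weights(seed_res, (n, n))             # recurrent weights
    W_in = binary_weights(seed_in, (n, m), density=1.0)  # input weights
    pre = W @ x + W_in @ u
    return (1 - leak) * x + leak * np.tanh(pre)

def multi_reservoir_step(states, u, seeds):
    """Update several small independent reservoirs (one seed each),
    standing in for the paper's split-reservoir scheme."""
    return [reservoir_step(x, u, s, s + 1) for x, s in zip(states, seeds)]
```

In hardware, the seeded generator would typically be a lightweight pseudorandom circuit (e.g., an LFSR) clocked alongside the multiply-accumulate datapath, so the memory cost of the reservoir collapses to a handful of seed registers; the readout layer is then trained on the concatenated states of the small reservoirs.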
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2021.3085165