RISC-V Toolchain and Agile Development based Open-source Neuromorphic Processor
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: In recent decades, neuromorphic computing, which aims to imitate the behavior of the brain, has been developed across various fields of computer science. The Artificial Neural Network (ANN) is an important concept in Artificial Intelligence (AI) and is widely used for recognition and classification. To simulate brain behavior on hardware in a way that is both fast and energy-efficient, researchers need an advanced approach such as neuromorphic computing. In this context, the Spiking Neural Network (SNN) is an optimal choice for hardware implementation. Recent work has focused on accelerating SNN computation. However, most accelerator solutions are based on a CPU-accelerator architecture, which is energy-inefficient due to its complex control flow. This paper proposes Wenquxing 22A, a low-power neuromorphic processor that combines general-purpose CPU functions with SNN computation, executed efficiently through RISC-V SNN extension instructions. The main idea of Wenquxing 22A is to integrate the SNN calculation unit into the pipeline of a general-purpose CPU, achieving low-power computing with customized RISC-V SNN instructions version 1.0 (RV-SNN V1.0), the Streamlined Leaky Integrate-and-Fire (LIF) model, and binary stochastic spike-timing-dependent plasticity (STDP). The source code of Wenquxing 22A is released online on Gitee and GitHub. We apply Wenquxing 22A to recognition on the MNIST dataset to compare it with other SNN systems. Our experimental results show that Wenquxing 22A reduces energy consumption by 5.13 times compared with the accelerator solution ODIN, at comparable classification accuracy: 85.00% for 3-bit ODIN online learning and 91.91% for 1-bit Wenquxing 22A.
DOI: 10.48550/arxiv.2210.00562
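The abstract names a Streamlined LIF neuron model and a binary stochastic STDP learning rule but gives no equations. Below is a minimal Python sketch of a generic leaky integrate-and-fire update combined with a 1-bit stochastic STDP-style weight flip, intended only to illustrate these concepts. All constants, update rules, and function names are illustrative assumptions and do not reproduce the Wenquxing 22A design or the RV-SNN V1.0 instruction set.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (assumed, not taken from the paper).
TAU = 20.0     # membrane time constant
V_TH = 1.0     # firing threshold
P_LTP = 0.10   # probability of setting a weight bit on a causal pre/post pair
P_LTD = 0.05   # probability of clearing a weight bit otherwise

def lif_step(v, i_syn, dt=1.0):
    # Leak the membrane potential, integrate the input current,
    # fire when the threshold is crossed, then reset to zero.
    v = v + dt * (-v / TAU + i_syn)
    spike = v >= V_TH
    v = np.where(spike, 0.0, v)
    return v, spike

def binary_stochastic_stdp(w, pre, post):
    # Keep weights 1-bit: probabilistically set a bit when a presynaptic
    # spike coincides with a postsynaptic spike, clear it otherwise.
    potentiate = pre & post & (rng.random(w.shape) < P_LTP)
    depress = pre & ~post & (rng.random(w.shape) < P_LTD)
    return np.where(potentiate, 1, np.where(depress, 0, w))

# Tiny usage example: four inputs driving a single output neuron.
w = rng.integers(0, 2, size=4)                 # 1-bit synaptic weights
v = np.zeros(1)
for _ in range(100):
    pre = rng.random(4) < 0.3                  # random input spikes
    i_syn = np.array([float(np.dot(w, pre))])  # weighted input current
    v, post = lif_step(v, i_syn)
    w = binary_stochastic_stdp(w, pre, np.repeat(post, 4))

In the processor described by the abstract, such membrane-update and weight-update operations would be carried out by the SNN calculation unit inside the CPU pipeline via the custom RV-SNN instructions, but those encodings are not described in this record.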