Re-configurable parallel Feed-Forward Neural Network implementation using FPGA
Saved in:
Published in: Integration (Amsterdam) 2024-07, Vol. 97, p. 102176, Article 102176
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: This paper proposes a novel hardware architecture for a Feed-Forward Neural Network (FFNN) that minimizes the number of execution clock cycles needed for the network's computation. The architecture relies on two physical layers that are multiplexed and reused during the FFNN computation to achieve an efficient parallel design. The two physical layers are designed to handle the computation of neural networks (NNs) of different sizes. The hardware resources of the proposed FFNN architecture are independent of the number of NN layers; they depend only on the number of neurons in the largest layer. This versatile architecture serves as an accelerator for Deep Neural Network (DNN) computations, exploiting parallelism by making the two physical layers work in parallel throughout the computation. The architecture was implemented with an 18-bit fixed-point representation, reaching a 200 MHz clock frequency on a Spartan-7 FPGA. Furthermore, it achieves a lower neuron computation factor than previous works in the literature.
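The abstract describes reusing two physical layers in alternation across an arbitrary number of logical layers, so hardware cost tracks only the widest layer. A minimal software sketch of that scheduling idea follows; the function names, the ReLU activation, and the toy weights are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical software model of ping-pong layer reuse: only two
# "engine" slots exist, yet they compute any number of logical layers.

def neuron(weights, bias, inputs):
    # Multiply-accumulate followed by a ReLU activation
    # (the activation choice is an assumption, not from the paper).
    acc = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, acc)

def layer(W, b, inputs):
    # One logical layer: each row of W and entry of b drives one neuron.
    return [neuron(w, bi, inputs) for w, bi in zip(W, b)]

def ffnn_two_engines(layers, x):
    """layers: list of (W, b) pairs, one per logical layer.
    Engine i % 2 computes logical layer i, so the two engines
    alternate (ping-pong), mimicking the two multiplexed
    physical layers described in the abstract."""
    engines = [None, None]
    for i, (W, b) in enumerate(layers):
        engines[i % 2] = layer(W, b, x)  # reuse one of two slots
        x = engines[i % 2]               # output feeds the next layer
    return x

# Toy 2-2-1 network: both logical layers fit in the two engine slots.
net = [
    ([[1.0, 1.0], [1.0, -1.0]], [0.0, 0.0]),  # hidden layer (2 neurons)
    ([[1.0, 1.0]], [0.0]),                    # output layer (1 neuron)
]
print(ffnn_two_engines(net, [1.0, 2.0]))
```

In hardware, the real gain comes from the two engines overlapping their work in time; this sequential model only illustrates the resource-reuse schedule, not the parallelism.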
ISSN: 0167-9260, 1872-7522
DOI: 10.1016/j.vlsi.2024.102176