Towards Budget-Driven Hardware Optimization for Deep Convolutional Neural Networks using Stochastic Computing
Saved in:
Main authors: | , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Recently, Deep Convolutional Neural Networks (DCNNs) have achieved tremendous success in many machine learning applications. Nevertheless, the deep structure has brought significant increases in computational complexity. Large-scale deep learning systems mainly operate in high-performance server clusters, which restricts application extensions to personal or mobile devices. Previous works on GPU and/or FPGA acceleration for DCNNs show increasing speedup, but ignore other constraints such as area, power, and energy. Stochastic Computing (SC), as a unique data representation and processing technique, has the potential to enable fully parallel and scalable hardware implementations of large-scale deep learning systems. This paper proposes an automatic design allocation algorithm driven by budget requirements while considering overall accuracy. This systematic method enables the automatic design of a DCNN in which all design parameters are jointly optimized. Experimental results demonstrate that the proposed algorithm achieves a joint optimization of all design parameters under a comprehensive DCNN budget. |
---|---|
DOI: | 10.48550/arxiv.1805.04142 |
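For context on the Stochastic Computing representation the abstract refers to: in unipolar SC, a value in [0, 1] is encoded as the ones-density of a random bitstream, so multiplication reduces to a single AND gate per bit, which is what makes fully parallel, low-area hardware plausible. A minimal software sketch of this encoding (the function names and bitstream length are illustrative, not taken from the paper):

```python
import random

def to_bitstream(p, n, rng):
    """Encode a probability p in [0, 1] as a length-n unipolar stochastic bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(xs, ys):
    """Unipolar SC multiplication: bitwise AND of two independent bitstreams."""
    return [a & b for a, b in zip(xs, ys)]

def value(bits):
    """Decode a bitstream back to a value: its ones-density."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000          # longer streams trade latency for accuracy
x, y = 0.6, 0.5
xs = to_bitstream(x, n, rng)
ys = to_bitstream(y, n, rng)
est = value(sc_multiply(xs, ys))  # ones-density approximates x * y = 0.3
```

The estimate converges to x * y with variance shrinking as 1/n; this accuracy-vs-stream-length trade-off is exactly the kind of design parameter a budget-driven allocation algorithm would tune.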