Towards Efficient Neural Networks On-a-chip: Joint Hardware-Algorithm Approaches
Format: Article
Language: English
Online access: Order full text
Abstract: Machine learning algorithms have made significant advances in many applications. However, their hardware implementation on state-of-the-art platforms still faces several challenges and is limited by various factors, such as memory volume, memory bandwidth, and interconnection overhead. The adoption of the crossbar architecture with emerging memory technology partially solves the problem but introduces process variation and other concerns. In this paper, we present novel solutions to two fundamental issues in crossbar implementation of Artificial Intelligence (AI) algorithms: device variation and insufficient interconnections. These solutions are inspired by the statistical properties of the algorithms themselves, especially the redundancy in neural network nodes and connections. Through Random Sparse Adaptation and by pruning connections following the Small-World model, we demonstrate robust and efficient performance on representative datasets such as MNIST and CIFAR-10. Moreover, we present a Continuous Growth and Pruning algorithm for future learning and adaptation on hardware.
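To make the device-variation idea concrete, below is a minimal illustrative sketch of the Random Sparse Adaptation (RSA) concept described in the abstract: crossbar weights are perturbed by multiplicative device variation, while a small random fraction of weights is assumed to reside in reliable on-chip memory where it can be adapted. The variation model, the fraction `frac`, and the compensation rule are simplified assumptions for illustration, not the paper's actual training procedure.

```python
import numpy as np

def random_sparse_adaptation(W_ideal, variation_std=0.1, frac=0.05, seed=0):
    """Illustrative RSA sketch (assumed parameters, not the paper's method):
    apply multiplicative device variation to all weights, then compensate
    a small random subset assumed to live in reliable on-chip memory."""
    rng = np.random.default_rng(seed)
    # Crossbar weights after lognormal-like multiplicative device variation.
    W_chip = W_ideal * np.exp(rng.normal(0.0, variation_std, W_ideal.shape))
    # Pick a small random subset of cells to host in reliable memory.
    adapt = rng.random(W_ideal.shape) < frac
    # Simplified compensation: restore the ideal value at adapted cells
    # (a stand-in for retraining only those cells).
    W_chip[adapt] = W_ideal[adapt]
    return W_chip, adapt

W = np.random.default_rng(1).standard_normal((64, 64))
W_chip, adapt = random_sparse_adaptation(W)
print(f"adapted {adapt.mean():.1%} of cells; "
      f"mean residual error {np.abs(W_chip - W).mean():.4f}")
```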
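Likewise, the following sketch illustrates Small-World-style pruning of a dense layer under stated assumptions: each output unit keeps a few "local" connections (nearest inputs by index), and each local link is rewired to a random input with probability `p`, yielding mostly local wiring plus a few long-range shortcuts, as in the Watts-Strogatz model. The parameters `k` and `p` and the index-based notion of locality are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def small_world_mask(n_in, n_out, k=8, p=0.1, seed=0):
    """Build a sparse connectivity mask inspired by the Watts-Strogatz
    small-world model (illustrative, not the paper's exact algorithm)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_out, n_in), dtype=bool)
    for j in range(n_out):
        # Map output j onto the input "ring" and take its k nearest inputs.
        center = int(j * n_in / n_out)
        local = (center + np.arange(-(k // 2), k - k // 2)) % n_in
        for i in local:
            # With probability p, rewire this local link to a random input,
            # creating a long-range shortcut.
            target = rng.integers(n_in) if rng.random() < p else i
            mask[j, target] = True
    return mask

# Prune a dense weight matrix down to the small-world connectivity.
W = np.random.default_rng(2).standard_normal((32, 128))
mask = small_world_mask(n_in=128, n_out=32, k=8, p=0.1)
W_sparse = W * mask
print(f"kept {mask.mean():.1%} of connections")
```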
DOI: 10.48550/arxiv.1906.08866