A self-scaling neural hardware structure that reduces the effect of some implementation errors

Detailed Description

Bibliographic Details
Main Authors: Djahanshahi, H., Ahmadi, M., Jullien, G.A., Miller, W.C.
Format: Conference Proceedings
Language: English
Description
Summary: This paper explores a neural network hardware structure with distributed neurons that exhibits useful self-scaling and averaging properties. In conventional sigmoidal neural networks with lumped neurons, the effects of weight errors and mismatches become more noticeable at the output as the network grows. It is shown here, on the basis of a stochastic model, that the inherent scaling property of the distributed-neuron structure keeps the output noise (error)-to-signal ratio under control as the number of inputs to an Adaline increases. Moreover, the averaging effect of the distributed elements minimizes characteristic variations among neurons. Together, these properties yield a robust hybrid hardware implementation with digital synaptic weights and analog neurons. A VLSI realization and an application of this neural structure are described.
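To illustrate the averaging effect described in the summary, the following Monte Carlo sketch (Python/NumPy, not taken from the paper) compares a lumped sigmoid neuron, whose single multiplicative mismatch term never averages out, against a distributed neuron whose nonlinearity is split into one sub-element per synapse, each sub-element carrying its own independent mismatch. The tanh nonlinearity, the mismatch standard deviation SIGMA, and the 1/N weight normalization are illustrative assumptions, not the authors' circuit-level stochastic model.

    import numpy as np

    rng = np.random.default_rng(0)
    SIGMA = 0.05      # std of each element's multiplicative mismatch (assumed)
    TRIALS = 20000    # Monte Carlo repetitions per network size

    def lumped(x, w, eps):
        # Lumped neuron: one sigmoid, so a single mismatch term scales
        # the whole output and never averages out.
        return (1.0 + eps) * np.tanh(w @ x)

    def distributed(x, w, eps_vec):
        # Distributed neuron: the sigmoid is split into one sub-element
        # per synapse; each sub-element has its own independent mismatch,
        # and the N errors average out at the summed (here: mean) output.
        n = len(w)
        return np.mean((1.0 + eps_vec) * np.tanh(n * w * x))

    for n in (4, 16, 64, 256):
        x = rng.uniform(0.1, 1.0, n)   # positive inputs keep the nominal
        w = np.full(n, 1.0 / n)        # output well away from zero
        y_l = [lumped(x, w, rng.normal(0, SIGMA)) for _ in range(TRIALS)]
        y_d = [distributed(x, w, rng.normal(0, SIGMA, n)) for _ in range(TRIALS)]
        r_l = np.std(y_l) / abs(np.mean(y_l))
        r_d = np.std(y_d) / abs(np.mean(y_d))
        print(f"N={n:4d}  lumped err/signal={r_l:.4f}  distributed={r_d:.4f}")

Running this shows the lumped error-to-signal ratio staying near SIGMA for every N, while the distributed ratio falls roughly as SIGMA/sqrt(N). This toy model captures only the averaging effect; the paper's stochastic analysis additionally treats how weight errors and mismatches scale with network size.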
ISSN: 1089-3555, 2379-2329
DOI: 10.1109/NNSP.1997.622441