Back propagation simulations using limited precision calculations

Bibliographic details
Main authors: Holt, J.L., Baker, T.E.
Format: Conference paper
Language: English
Description
Summary: The precision required for neural net algorithms is an important question facing hardware architects. The authors present simulation results that compare floating point and limited precision integer back-propagation simulators. Data sets from the neural network benchmark suite maintained by Carnegie Mellon University were used to compare integer and floating point implementations. The simulation results indicate that integer computation works quite well for the back-propagation algorithm. In all cases except one, the limited precision integer simulations performed as well as the floating point simulations. The effect of reducing the precision of the trained weights is also reported.
DOI:10.1109/IJCNN.1991.155324
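The abstract refers to reducing the precision of trained weights. The following is a minimal sketch (Python with NumPy, not the authors' simulator) of how trained floating-point weights can be quantized to signed fixed-point integers and the resulting forward pass compared against the floating-point network. The network shapes, bit widths, and tanh activation are illustrative assumptions, not details taken from the paper.

# Minimal sketch (illustrative only): quantize trained float weights to
# signed n-bit fixed-point integers and compare forward passes.
import numpy as np

def quantize_fixed_point(w, bits=8, frac_bits=6):
    # Round to signed fixed-point with `frac_bits` fractional bits,
    # clipping to the representable n-bit integer range.
    scale = 2 ** frac_bits
    q = np.clip(np.round(w * scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.astype(np.int32), scale

def forward_float(x, W1, W2):
    # Two-layer tanh network in full floating point.
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)

def forward_quantized(x, W1q, W2q, scale):
    # Integer weights are rescaled back to real values before the
    # nonlinearity; only weight storage precision is reduced here.
    h = np.tanh(x @ (W1q / scale))
    return np.tanh(h @ (W2q / scale))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 3))

W1q, s = quantize_fixed_point(W1)
W2q, _ = quantize_fixed_point(W2)

print("max output difference:",
      np.abs(forward_float(x, W1, W2) - forward_quantized(x, W1q, W2q, s)).max())

Running the sketch prints the largest elementwise deviation between the floating-point and reduced-precision outputs, the kind of comparison the abstract describes, though the paper's own experiments use the Carnegie Mellon benchmark data sets rather than random inputs.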