Fixed-point roundoff error analysis of large feedforward neural networks
Main authors: | , , |
---|---|
Format: | Conference proceeding |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Digital implementations of neural nets must consider finite wordlength effects. For large sized nets, it is particularly important to investigate the roundoff errors in order to realize low-cost hardware implementations while satisfying precision constraints. This paper presents output error expressions for a large feedforward neural net, which are based on statistical error analysis. Weight quantization errors as well as arithmetic errors due to rounding of multiplier output and sigmoid output are modeled. The results indicate that for equal wordlengths, multiplier roundoff errors exceed weight quantization errors by about an order of magnitude. |
DOI: | 10.1109/IJCNN.1993.717037 |
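
To make the two error sources named in the abstract concrete, the sketch below runs a small Monte Carlo comparison for a single sigmoid layer: in one pass only the stored weights are quantized, in the other every multiplier output is rounded before accumulation, and the resulting output errors are measured against a full-precision reference. This is an illustrative simulation, not the paper's analytical error expressions; the layer sizes, the fractional wordlength `frac_bits`, and the `quantize` helper are assumptions chosen for the example, and the measured ratio between the two error sources depends on net size, input statistics, and scaling.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to the nearest multiple of 2**-frac_bits (uniform rounding model)."""
    q = 2.0 ** (-frac_bits)
    return np.round(x / q) * q

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_ideal(x, W):
    # Reference: full-precision multiply-accumulate followed by the sigmoid.
    return sigmoid(W @ x)

def layer_weight_quantized(x, W, frac_bits):
    # Only the stored weights are quantized; the arithmetic stays exact.
    return sigmoid(quantize(W, frac_bits) @ x)

def layer_multiplier_rounded(x, W, frac_bits):
    # Every multiplier output is rounded before being accumulated.
    return sigmoid(quantize(W * x[None, :], frac_bits).sum(axis=1))

rng = np.random.default_rng(0)
n_in, n_out, frac_bits, trials = 256, 64, 10, 500
err_w, err_m = [], []
for _ in range(trials):
    W = rng.uniform(-1.0, 1.0, size=(n_out, n_in)) / np.sqrt(n_in)
    x = rng.uniform(0.0, 1.0, size=n_in)
    y = layer_ideal(x, W)
    err_w.append(np.std(layer_weight_quantized(x, W, frac_bits) - y))
    err_m.append(np.std(layer_multiplier_rounded(x, W, frac_bits) - y))

print(f"weight-quantization output error (mean std): {np.mean(err_w):.2e}")
print(f"multiplier-roundoff output error (mean std): {np.mean(err_m):.2e}")
```

With inputs confined to [0, 1], each weight-quantization error enters the accumulated sum attenuated by its input sample, while each multiplier rounding error enters at full size, so the simulation tends to point in the same direction as the abstract's finding that multiplier roundoff dominates weight quantization at equal wordlengths.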