Neural networks, linear functions and neglected non-linearity

Bibliographic details
Published in: Computational Management Science, 2003-12, Vol. 1 (1), pp. 15-29
Main authors: Curry, B., Morgan, P.H.
Format: Article
Language: English
Online access: Full text
Description
Abstract: The multiplicity of approximation theorems for neural networks does not relate to the approximation of linear functions per se. The problem for the network is to construct a linear function by superpositions of non-linear activation functions such as the sigmoid function. This issue is important for applications of NNs in statistical tests for neglected nonlinearity, where it is common practice to include a linear function through skip-layer connections. Our theoretical analysis and evidence point in a similar direction, suggesting that the network can in fact provide linear approximations without additional 'assistance'. Our paper suggests that skip-layer connections are unnecessary, and, if employed, could lead to misleading results.
ISSN: 1619-697X, 1619-6988
DOI: 10.1007/s10287-003-0003-4
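As a concrete illustration of the abstract's central claim, the sketch below shows how a single hidden sigmoid unit can reproduce a linear function on a bounded interval by operating in the sigmoid's near-linear central region, with no skip-layer (direct linear) connection. This is a minimal numerical sketch of the general idea, not code from the paper; the target function, weight values, and interval are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target linear function f(x) = a*x + c on [-1, 1] (illustrative choice)
a, c = 2.0, 1.0
x = np.linspace(-1.0, 1.0, 201)

# Near zero, sigmoid(z) ~ 1/2 + z/4, so a single hidden unit with a
# small input weight w, output weight 4a/w, and output bias c - 2a/w
# yields  (4a/w)*sigmoid(w*x) + (c - 2a/w) ~ a*x + c.
w = 0.01                 # small input weight keeps w*x in the linear region
v = 4.0 * a / w          # output weight
b = c - 2.0 * a / w      # output bias cancels the constant v*sigmoid(0)

y_hat = v * sigmoid(w * x) + b
print("max abs error:", np.max(np.abs(y_hat - (a * x + c))))
# ~1.7e-05: the dominant residual is the cubic term of the sigmoid's
# Taylor expansion, a*w**2*x**3/12, which vanishes as w -> 0.
```

Under these assumptions the approximation error shrinks quadratically in the input weight, which is consistent with the abstract's suggestion that a sigmoid network can deliver linear behaviour without skip-layer 'assistance'.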