The computational intractability of training sigmoidal neural networks
Published in: IEEE Transactions on Information Theory, Jan. 1997, Vol. 43, No. 1, pp. 167-173
Format: Article
Language: English
Abstract: We demonstrate that the problem of approximately interpolating a target function by a neural network is computationally intractable. In particular, the interpolation training problem for a neural network with two monotone Lipschitzian sigmoidal internal activation functions and one linear output node is shown to be NP-hard, and NP-complete if the internal nodes are in addition piecewise ratios of polynomials. This partially answers a question of Blum and Rivest (1992) concerning the NP-completeness of training a logistic sigmoidal 3-node network. An extension of the result is then given for networks with n monotone sigmoidal internal nodes and one convex output node. This indicates that many multivariate nonlinear regression problems may be computationally infeasible.
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/18.567673
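To make the architecture named in the abstract concrete, below is a minimal sketch, not drawn from the paper itself, of the 3-node network in question: two monotone Lipschitzian sigmoidal hidden units (here the logistic sigmoid is assumed as a stand-in) combined by a linear output node, plus a check of an approximate-interpolation condition. The function names, the tolerance parameter `eps`, and the exact form of the decision check are illustrative assumptions.

```python
# Hypothetical sketch (not from the paper) of the 3-node architecture whose
# interpolation training problem is shown to be NP-hard: two monotone
# Lipschitzian sigmoidal internal nodes feeding one linear output node.
import numpy as np


def sigmoid(z):
    """Logistic sigmoid: one choice of monotone, Lipschitzian sigmoidal activation."""
    return 1.0 / (1.0 + np.exp(-z))


def three_node_net(x, w1, b1, w2, b2, a, c):
    """Two sigmoidal hidden units combined by a linear output node.

    x : (d,) input vector; w1, w2 : (d,) hidden weights;
    b1, b2 : scalar hidden biases; a : (2,) output weights; c : scalar output bias.
    """
    h1 = sigmoid(np.dot(w1, x) + b1)
    h2 = sigmoid(np.dot(w2, x) + b2)
    return a[0] * h1 + a[1] * h2 + c


def approximately_interpolates(params, data, eps):
    """Check whether the given weights fit every sample within tolerance eps.

    The hardness result concerns deciding whether such parameters exist for a
    given sample; this helper only *verifies* a candidate (the easy direction),
    it does not search for one.
    """
    w1, b1, w2, b2, a, c = params
    return all(abs(three_node_net(x, w1, b1, w2, b2, a, c) - y) <= eps
               for x, y in data)


if __name__ == "__main__":
    # Illustrative toy data and a hand-picked candidate parameter set.
    rng = np.random.default_rng(0)
    data = [(rng.standard_normal(3), rng.uniform(-1, 1)) for _ in range(5)]
    params = (rng.standard_normal(3), 0.0,
              rng.standard_normal(3), 0.0,
              np.array([1.0, -1.0]), 0.0)
    print(approximately_interpolates(params, data, eps=2.0))
```

The contrast between the quick verification above and the search for satisfying weights is the familiar NP flavor of the result: a candidate weight assignment can be checked efficiently, while the paper shows that finding one (for this small sigmoidal architecture) is NP-hard in general.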