Training a Sigmoidal Node Is Hard
| Published in: | Neural Computation, 1999-07, Vol. 11 (5), pp. 1249-1260 |
|---|---|
| Main author: | |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
Abstract: This article proves that the task of computing near-optimal weights for sigmoidal nodes under the regression norm is NP-Hard. For the special case where the sigmoid is piecewise linear, we prove a slightly stronger result: that computing the optimal weights is NP-Hard. These results parallel the result for the one-node pattern recognition problem, namely that determining the optimal weights for a threshold logic node is also intractable. Our results have important consequences for constructive algorithms that build a regression model one node at a time. They suggest that although such methods are (in principle) capable of producing efficient-size representations (Barron, 1993; Jones, 1992), finding such representations may be computationally intractable. These results hold only in the deterministic sense; that is, they do not exclude the possibility that such representations may be found efficiently with high probability. In fact, this motivates the use of heuristic or randomized algorithms for this problem.
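For concreteness, the single-node training problem the abstract refers to can be sketched as follows. This is a reconstruction under stated assumptions, not taken from the paper: it assumes the standard logistic sigmoid and a sum-of-squared-errors regression norm (the record does not specify which norm the paper uses), and one common form of the piecewise-linear sigmoid.

```latex
% Training a single sigmoidal node: given samples (x_i, y_i), i = 1..m,
% find weights w and bias b minimizing the regression error.
% Assumption: squared-error norm and the logistic sigmoid.
\[
  \min_{w \in \mathbb{R}^n,\; b \in \mathbb{R}}
  \sum_{i=1}^{m} \bigl( \sigma(w \cdot x_i + b) - y_i \bigr)^2,
  \qquad
  \sigma(z) = \frac{1}{1 + e^{-z}}.
\]
% The piecewise-linear special case replaces \sigma with a ramp:
\[
  \sigma_{\mathrm{pl}}(z) =
  \begin{cases}
    0 & z \le 0, \\
    z & 0 < z < 1, \\
    1 & z \ge 1.
  \end{cases}
\]
```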
| ISSN: | 0899-7667 (print), 1530-888X (online) |
|---|---|
| DOI: | 10.1162/089976699300016449 |
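The abstract closes by motivating heuristic or randomized algorithms for this problem. Purely as an illustration of that idea, and not as the paper's method, here is a minimal random-restart gradient-descent sketch for fitting a single logistic node under squared error; the function name, hyperparameters, and toy data are all invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_sigmoidal_node(X, y, restarts=10, steps=2000, lr=0.1, seed=0):
    """Heuristic fit of a single sigmoidal node by random-restart
    gradient descent on the squared regression error.
    X: (m, n) inputs; y: (m,) real-valued targets."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    best_w, best_b, best_err = None, None, np.inf
    for _ in range(restarts):
        # Random initialization: each restart explores a different basin,
        # which is the "randomized" part motivated by the abstract.
        w = rng.normal(scale=1.0, size=n)
        b = rng.normal()
        for _ in range(steps):
            p = sigmoid(X @ w + b)   # node output
            r = p - y                # residuals
            g = r * p * (1.0 - p)    # chain rule through the sigmoid
            w -= lr * (X.T @ g) / m  # gradient step on weights
            b -= lr * g.mean()       # gradient step on bias
        err = np.mean((sigmoid(X @ w + b) - y) ** 2)
        if err < best_err:
            best_w, best_b, best_err = w, b, err
    return best_w, best_b, best_err

# Toy usage: noisy samples generated from a planted node.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true, b_true = np.array([2.0, -1.0, 0.5]), 0.3
y = sigmoid(X @ w_true + b_true) + 0.05 * rng.normal(size=200)
w, b, err = fit_sigmoidal_node(X, y)
print(f"squared error: {err:.4f}")
```

Consistent with the hardness results above, such a heuristic offers no worst-case guarantee; random restarts only improve the chance of landing near a good optimum.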