A new approach to perceptron training

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2003-01, Vol. 14 (1), pp. 216-221
Authors: Eitzinger, C., Plach, H.
Format: Article
Language: English
Description
Abstract: The training of perceptrons is discussed in the framework of nonsmooth optimization. An investigation of Rosenblatt's perceptron training rule shows that convergence, or the failure to converge in certain situations, can be easily understood in this framework. An algorithm based on results from nonsmooth optimization is proposed and its relation to the "constrained steepest descent" method is investigated. Numerical experiments verify that the "constrained steepest descent" algorithm may be further improved by the integration of methods from nonsmooth optimization.
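
For context, the following is a minimal sketch of the classic Rosenblatt perceptron training rule that the abstract refers to, not the nonsmooth-optimization algorithm proposed in the article. The function name, learning rate, epoch limit, and toy data are illustrative assumptions.

import numpy as np

def rosenblatt_perceptron(X, y, lr=1.0, max_epochs=100):
    """Classic Rosenblatt rule: update weights only on misclassified
    samples, w <- w + lr * y_i * x_i, with the bias folded into w."""
    # Append a constant 1 feature so the bias is part of the weight vector.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (labels in {-1, +1})
                w += lr * yi * xi         # Rosenblatt update step
                errors += 1
        if errors == 0:                   # converged: training set is separated
            break
    return w

# Toy usage: two linearly separable clusters labeled -1 / +1.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
print("weights (incl. bias):", rosenblatt_perceptron(X, y))

On linearly separable data this rule converges in finitely many updates; the abstract's point is that its behavior, including failure to converge on nonseparable data, can be analyzed in the framework of nonsmooth optimization.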
ISSN:1045-9227
2162-237X
1941-0093
2162-2388
DOI:10.1109/TNN.2002.806631