The Transition to Perfect Generalization in Perceptrons

Bibliographic Details
Published in: Neural Computation 1991-09, Vol. 3 (3), p. 386-401
Main authors: Baum, Eric B.; Lyuu, Yuh-Dauh
Format: Article
Language: English
Online access: Full text
Description
Summary: Several recent papers (Gardner and Derrida 1989; Györgyi 1990; Sompolinsky et al. 1990) have found, using methods of statistical physics, that a transition to perfect generalization occurs in training a simple perceptron whose weights can only take values ±1. We give a rigorous proof of such a phenomenon. That is, we show, for α = 2.0821, that if at least αn examples are drawn from the uniform distribution on {+1, −1}^n and classified according to a target perceptron w_t ∈ {+1, −1}^n as positive or negative according to whether w_t · x is nonnegative or negative, then with probability approaching 1 as n → ∞ there is no other such perceptron consistent with the examples. Numerical results indicate further that perfect generalization holds for α as low as 1.5.
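The setup described in the abstract can be simulated directly for small n: draw roughly αn examples uniformly from {+1, −1}^n, label them with a random ±1 target perceptron via the sign of the dot product, and count by brute force how many of the 2^n candidate ±1 perceptrons remain consistent. A minimal sketch (function names and the choice of n, α, and seed are illustrative, not from the paper):

```python
import itertools
import random

def label(w, x):
    # Classify x as +1 if w . x is nonnegative, else -1, per the abstract's rule.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def consistent_count(n, alpha, rng):
    """Draw round(alpha * n) examples uniformly from {+1, -1}^n, label them
    with a random target perceptron w_t in {+1, -1}^n, and count how many
    of the 2^n candidate +/-1 perceptrons fit every example."""
    w_t = [rng.choice((-1, 1)) for _ in range(n)]
    m = round(alpha * n)
    examples = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
    labels = [label(w_t, x) for x in examples]
    # Brute-force enumeration of all 2^n candidate weight vectors
    # (feasible only for small n; the paper's result concerns n -> infinity).
    return sum(
        1
        for w in itertools.product((-1, 1), repeat=n)
        if all(label(w, x) == y for x, y in zip(examples, labels))
    )

rng = random.Random(0)
# The target itself is always consistent, so the count is at least 1;
# the transition to perfect generalization means the count is exactly 1
# with high probability once alpha exceeds the threshold and n is large.
print(consistent_count(10, 2.1, rng))
```

For small n the count often exceeds 1; the paper's theorem says that for α = 2.0821 the probability of any non-target survivor vanishes as n grows, and the numerical evidence cited suggests the same already near α = 1.5.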
ISSN: 0899-7667, 1530-888X
DOI:10.1162/neco.1991.3.3.386