Derivation of learning vector quantization algorithms
Main authors:
Format: Conference proceedings
Language: English
Subjects:
Online access: Order full text
Abstract: A formal derivation of three learning rules for the adaptation of the synaptic weight vectors of neurons representing the prototype vectors of the class distribution in a classifier is presented. A decision surface function and a set of adaptation algorithms for adjusting this surface are derived using the gradient-descent approach to minimize the classification error. This also provides a formal analysis of the Kohonen learning vector quantization (LVQ1 and LVQ2) algorithms. In particular, it is shown that to minimize the classification error, one of the learning equations in the LVQ1 algorithm is not required. An application of the learning algorithms to the design of a neural network classifier is presented. The performance of the classifier was tested and compared to the K-NN decision rule on the real Iris data set.
DOI: 10.1109/IJCNN.1992.227115
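
For orientation, the sketch below shows the standard Kohonen LVQ1 update rule that the abstract analyses, written in Python with NumPy. It is a minimal illustration, not the paper's derived learning rules: the function and parameter names (`lvq1_train`, `lvq_classify`, `lr`, `epochs`) are assumptions made here, and the paper's own result is that one of the two LVQ1 update equations is not required to minimize the classification error.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=10):
    """Textbook LVQ1 training loop (one winning prototype per sample).

    X            : (n_samples, n_features) training vectors
    y            : (n_samples,) class labels
    prototypes   : (n_protos, n_features) initial prototype vectors (float)
    proto_labels : (n_protos,) class label assigned to each prototype
    """
    W = prototypes.astype(float).copy()
    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)  # linearly decaying learning rate
        for xi, yi in zip(X, y):
            # find the nearest prototype (the "winner")
            j = np.argmin(np.linalg.norm(W - xi, axis=1))
            if proto_labels[j] == yi:
                # correct class: move the winner toward the sample
                W[j] += alpha * (xi - W[j])
            else:
                # wrong class: move the winner away from the sample
                # (per the abstract, one of the two LVQ1 update equations
                #  turns out not to be required to minimize the error)
                W[j] -= alpha * (xi - W[j])
    return W

def lvq_classify(x, W, proto_labels):
    """Assign x the label of its nearest prototype (nearest-prototype rule)."""
    return proto_labels[np.argmin(np.linalg.norm(W - x, axis=1))]
```

In practice the prototypes are initialized from labelled data, for example as per-class means or randomly chosen training samples of each class; classification then follows the nearest-prototype rule above, which could be compared against a K-NN decision rule on a data set such as Iris, as the abstract describes.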