Locally Trained Piecewise Linear Classifiers
Saved in:
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 1980-03, Vol. PAMI-2 (2), pp. 101-111
Main authors: ,
Format: Article
Language: English
Subject terms:
Online access: Order full text
Abstract: We describe a versatile technique for designing computer algorithms for separating multiple-dimensional data (feature vectors) into two classes. We refer to these algorithms as classifiers. Our classifiers achieve nearly Bayes-minimum error rates while requiring relatively small amounts of memory. Our design procedure finds a set of close-opposed pairs of clusters of data. From these pairs the procedure generates a piecewise-linear approximation of the Bayes-optimum decision surface. A window training procedure on each linear segment of the approximation provides great flexibility of design over a wide range of class densities. The data consumed in the training of each segment are restricted to just those data lying near that segment, which makes possible the construction of efficient databases for the training process. Interactive simplification of the classifier is facilitated by an adjacency matrix and an incidence matrix. The adjacency matrix describes the interrelationships of the linear segments {L_i}. The incidence matrix describes the interrelationships among the polyhedrons formed by the hyperplanes containing {L_i}. We exploit switching theory to minimize the decision logic.
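The abstract describes the design procedure only at a high level, so the following Python sketch illustrates one plausible reading of it rather than the paper's actual algorithm. Every concrete choice here is an assumption: k-means stands in for whatever clustering the design procedure uses, a pair of clusters is treated as "close-opposed" when the two centroids are mutually nearest across the class boundary, and each segment's hyperplane is placed as the perpendicular bisector of the segment joining its pair of centroids, where the paper would instead refine the segment with its window training procedure.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Plain k-means; a stand-in for the clustering step of the design procedure.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def close_opposed_pairs(c0, c1):
    # Assumed pairing rule: (i, j) is close-opposed if each centroid is the
    # other's nearest centroid in the opposite class (mutual nearest neighbours).
    d = ((c0[:, None] - c1[None]) ** 2).sum(-1)
    pairs = []
    for i in range(len(c0)):
        j = int(np.argmin(d[i]))
        if int(np.argmin(d[:, j])) == i:
            pairs.append((i, j))
    return pairs

class PiecewiseLinearClassifier:
    def fit(self, X, y, k=3):
        c0 = kmeans(X[y == 0], k, seed=0)
        c1 = kmeans(X[y == 1], k, seed=1)
        pairs = close_opposed_pairs(c0, c1)
        # One hyperplane per close-opposed pair: the perpendicular bisector of
        # the segment joining the two centroids. The paper would refine each
        # segment locally (window training); the bisector is only an initial placement.
        self.normals = np.array([c1[j] - c0[i] for i, j in pairs])
        self.midpoints = np.array([(c0[i] + c1[j]) / 2 for i, j in pairs])
        return self

    def predict(self, X):
        # Route each point to its nearest segment (nearest pair midpoint), then
        # classify by which side of that segment's hyperplane the point falls on.
        d = ((X[:, None] - self.midpoints[None]) ** 2).sum(-1)
        nearest = np.argmin(d, axis=1)
        side = np.einsum('nd,nd->n', X - self.midpoints[nearest], self.normals[nearest])
        return (side > 0).astype(int)
```

A minimal usage example on two synthetic Gaussian classes; routing each point to a single locally trained segment is what keeps both the memory footprint and the per-segment training set small, as the abstract emphasizes:

```python
rng = np.random.default_rng(42)
X = np.vstack([rng.normal([0, 0], 0.5, size=(200, 2)),
               rng.normal([2, 1], 0.5, size=(200, 2))])
y = np.r_[np.zeros(200, dtype=int), np.ones(200, dtype=int)]
clf = PiecewiseLinearClassifier().fit(X, y, k=2)
print("training accuracy:", (clf.predict(X) == y).mean())
```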
ISSN: 0162-8828, 1939-3539
DOI: 10.1109/TPAMI.1980.4766988