Improving Neural-Network Classifiers Using Nearest Neighbor Partitioning
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2017-10, Vol. 28 (10), p. 2255-2267
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: This paper presents a nearest neighbor partitioning method designed to improve the performance of a neural-network classifier. For neural-network classifiers, the number, positions, and labels of centroids are usually fixed in partition space before training. However, that approach limits the search for potential neural networks during optimization, because the quality of a neural-network classifier depends on how clear the decision boundaries between classes are. Although attempts have been made to generate floating centroids automatically, these methods still tend to produce sphere-like partitions and cannot form flexible decision boundaries. We propose using nearest neighbor classification in conjunction with a neural-network classifier. Instead of being bound by sphere-like boundaries, as is the case with centroid-based methods, the flexibility of nearest neighbors increases the chance of finding potential neural networks with arbitrarily shaped boundaries in partition space. Experimental results demonstrate that the proposed method achieves superior accuracy and average F-measure.
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2016.2580570
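
The abstract above describes mapping samples into a partition space with a neural network and then assigning class labels by nearest-neighbor lookup among mapped training points, rather than by distance to a fixed set of centroids. The following is a minimal, hypothetical Python sketch of that general idea only; the toy two-layer network, random weights, and synthetic data are illustrative assumptions and do not reproduce the paper's training procedure or its nearest neighbor partitioning algorithm.

```python
# Minimal sketch (not the authors' implementation): a neural network maps
# samples into a partition space; test samples are labeled by the nearest
# mapped training point instead of the nearest fixed centroid, so decision
# regions in partition space need not be sphere-like.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(X, W1, b1, W2, b2):
    """Toy two-layer network mapping inputs into a 2-D partition space."""
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

# Hypothetical synthetic data: two classes in a 4-D input space.
X_train = rng.normal(size=(100, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(20, 4))

# Randomly initialized weights stand in for a trained network.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

# Map training and test samples into partition space.
Z_train = mlp_forward(X_train, W1, b1, W2, b2)
Z_test = mlp_forward(X_test, W1, b1, W2, b2)

# Nearest-neighbor partitioning of the space: each test point takes the label
# of its closest mapped training point, allowing arbitrarily shaped boundaries.
dists = np.linalg.norm(Z_test[:, None, :] - Z_train[None, :, :], axis=-1)
y_pred = y_train[np.argmin(dists, axis=1)]
print(y_pred)
```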