Kernel nonnegative representation-based classifier
Published in: | Applied Intelligence (Dordrecht, Netherlands), 2022, Vol. 52 (2), p. 2269-2289 |
Main authors: | , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Non-negativity is a critical and explainable property of linear representation-based methods, leading to promising performance in pattern classification. Building on this property, a powerful linear representation-based classifier was proposed, the non-negative representation-based classifier (NRC). With the non-negativity constraint, the NRC strengthens the contribution of homogeneous samples in the linear representation while suppressing that of heterogeneous samples, since homogeneous samples tend to be positively correlated with the test sample. However, the NRC performs the non-negative representation in the original feature space rather than in a high-dimensional non-linear feature space, which is usually preferred when the data samples are not linearly separable. This leads to poor performance of the NRC, especially on high-dimensional data such as images. In this paper, we propose a Kernel Non-negative Representation-based Classifier (KNRC) to address this problem and achieve better results in pattern classification. Furthermore, we extend the KNRC to a dimensionality reduction version that reduces the dimensionality of the KNRC feature space while improving its classification ability. We provide extensive numerical experiments, including analysis and comparisons on 12 datasets (8 UCI datasets and 4 image datasets), to validate the state-of-the-art performance of the proposed method. |
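The core step the abstract describes, representing a test sample as a non-negative combination of training samples and then classifying by class-wise reconstruction residual, can be sketched as follows. This is a minimal illustration in the original feature space, not the authors' implementation; the function name `nrc_predict` and the toy data are our own, and the kernel extension of the paper would further replace inner products with kernel evaluations.

```python
import numpy as np
from scipy.optimize import nnls

def nrc_predict(X, labels, y):
    """Sketch of non-negative representation-based classification.

    Solves min_c ||y - X c||_2 subject to c >= 0 over all training
    samples, then assigns y to the class whose samples reconstruct
    it with the smallest residual.

    X      : (d, n) matrix whose columns are training samples
    labels : length-n array of class labels for the columns of X
    y      : length-d test sample
    """
    # Non-negative least squares over the whole training set.
    c, _ = nnls(X, y)
    classes = np.unique(labels)
    # Class-wise reconstruction residual using only that class's coefficients.
    residuals = [np.linalg.norm(y - X[:, labels == k] @ c[labels == k])
                 for k in classes]
    return classes[int(np.argmin(residuals))]
```

On a toy two-class problem where class 0 lies near the first axis and class 1 near the second, a test point near the first axis is assigned to class 0, because its non-negative coefficients concentrate on the homogeneous (class-0) columns.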
ISSN: | 0924-669X 1573-7497 |
DOI: | 10.1007/s10489-021-02486-0 |