Kernel Reverse Neighborhood Discriminant Analysis

Bibliographic Details
Published in: Electronics (Basel) 2023-03, Vol. 12 (6), p. 1322
Authors: Li, Wangwang, Tan, Hengliang, Feng, Jianwei, Xie, Ming, Du, Jiao, Yang, Shuo, Yan, Guofeng
Format: Article
Language: English
Subjects:
Online access: Full text
Description

Abstract: Currently, neighborhood linear discriminant analysis (nLDA) exploits reverse nearest neighbors (RNN) to avoid the assumption of linear discriminant analysis (LDA) that all samples from the same class should be independently and identically distributed (i.i.d.). nLDA performs well when a dataset contains multimodal classes. However, in complex pattern recognition tasks, such as visual classification, the complex appearance variations caused by deformation, illumination, and viewing angle often introduce non-linearity. Furthermore, multimodal classes are not easy to separate in a lower-dimensional feature space. One solution to these problems is to map the features to a higher-dimensional feature space for discriminant learning. Hence, in this paper, we employ kernel functions to map the original data to a higher-dimensional feature space, where the nonlinear multimodal classes can be better classified. We give a detailed derivation of the proposed kernel reverse neighborhood discriminant analysis (KRNDA) using the kernel trick. The proposed KRNDA outperforms the original nLDA on most datasets of the UCI benchmark database. In high-dimensional visual recognition tasks of handwritten digit recognition, object categorization and face recognition, our KRNDA achieves the best recognition results compared to several sophisticated LDA-based discriminators.
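The abstract does not include the KRNDA derivation itself, so the following is only an illustrative sketch of the kernel trick it describes: a plain two-class kernel Fisher discriminant (in the style of classical kernel discriminant analysis, not the paper's RNN-based KRNDA) applied to a toy multimodal dataset, where class 0 consists of two separate clusters that no single linear direction can separate from class 1. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fda(X, y, gamma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant (dual form).

    Returns the dual coefficients alpha and the training kernel matrix K;
    the 1-D projection of the training points is K @ alpha.
    """
    K = rbf_kernel(X, X, gamma)
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    # Class means in the implicit feature space, expressed via kernel columns.
    m0 = K[:, idx0].mean(axis=1)
    m1 = K[:, idx1].mean(axis=1)
    # Within-class scatter in dual form: sum_c K_c (I - 1/n_c) K_c^T.
    N = np.zeros_like(K)
    for idx in (idx0, idx1):
        Kc = K[:, idx]
        n = len(idx)
        N += Kc @ (np.eye(n) - np.ones((n, n)) / n) @ Kc.T
    # Regularized solve replaces the explicit generalized eigenproblem.
    alpha = np.linalg.solve(N + reg * np.eye(len(K)), m1 - m0)
    return alpha, K

# Toy multimodal data: class 0 has two clusters (around -3 and +3),
# class 1 sits between them around the origin -- linearly inseparable.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 0.3, (20, 2)),
               rng.normal(3, 0.3, (20, 2)),
               rng.normal(0, 0.3, (20, 2))])
y = np.array([0] * 40 + [1] * 20)

alpha, K = kernel_fda(X, y, gamma=0.5)
proj = K @ alpha  # 1-D discriminant projection of the training points
```

After the kernel mapping, the two class-0 clusters and class 1 become separable along a single discriminant direction, which is the effect the abstract attributes to working in the higher-dimensional feature space.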
ISSN: 2079-9292
DOI:10.3390/electronics12061322