An efficient KPCA algorithm based on feature correlation evaluation
Published in: Neural Computing & Applications, June 2014, Vol. 24 (7-8), pp. 1795-1806
Format: Article
Language: English
Online access: Full text
Abstract: Classic kernel principal component analysis (KPCA) is computationally inefficient when extracting features from large data sets. In this paper, we propose efficient KPCA (EKPCA), an algorithm that improves the computational efficiency of KPCA by using a linear combination of a small portion of the training samples, referred to as basic patterns, to approximate the KPCA feature extractor, i.e., the eigenvectors of the covariance matrix in the feature space. We show that feature correlation (i.e., the correlation between different feature components) can be evaluated by the cosine distance between kernel vectors, which are the column vectors of the kernel matrix. The proposed algorithm is easy to implement: it first uses feature correlation evaluation to determine the basic patterns, then uses them to reconstruct the KPCA model, extract features, and classify the test samples. Since there are usually far fewer basic patterns than training samples, EKPCA feature extraction is considerably more efficient than that of KPCA. Experimental results on several benchmark data sets show that EKPCA is much faster than KPCA while achieving similar classification performance.
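The abstract describes the algorithm only at a high level; the full derivation is in the article. The following Python code is a minimal sketch of the idea under stated assumptions: an RBF kernel, and a simple greedy rule that keeps a training sample as a basic pattern only when its kernel vector (a column of the kernel matrix) is not too cosine-similar to those of the already-selected basic patterns. The function names, the correlation threshold, and the greedy selection order are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)

def select_basic_patterns(K, threshold=0.95):
    # Greedy stand-in for the paper's feature correlation evaluation:
    # keep sample j only if its kernel vector K[:, j] has cosine
    # similarity below `threshold` with every selected kernel vector.
    n = K.shape[0]
    norms = np.linalg.norm(K, axis=0)
    selected = [0]
    for j in range(1, n):
        cos = K[:, selected].T @ K[:, j] / (norms[selected] * norms[j])
        if np.max(np.abs(cos)) < threshold:
            selected.append(j)
    return np.array(selected)

def ekpca_fit(X, n_components=2, gamma=0.5, threshold=0.95):
    K = rbf_kernel(X, X, gamma)
    basic = select_basic_patterns(K, threshold)
    Xb = X[basic]
    # Rebuild the KPCA model on the basic patterns only: center the
    # reduced kernel matrix and take its leading eigenvectors.
    Kb = rbf_kernel(Xb, Xb, gamma)
    m = Kb.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m
    Kc = J @ Kb @ J
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Scale so each eigenvector of the covariance operator has unit norm.
    alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
    return Xb, alphas, Kb

def ekpca_transform(X_new, Xb, alphas, Kb, gamma=0.5):
    # Projecting a test point costs one kernel evaluation per basic
    # pattern rather than per training sample: the source of the speed-up.
    Kt = rbf_kernel(X_new, Xb, gamma)
    m = Kb.shape[0]
    ones_t = np.ones((Kt.shape[0], m)) / m
    one_m = np.ones((m, m)) / m
    Kt_c = Kt - ones_t @ Kb - Kt @ one_m + ones_t @ Kb @ one_m
    return Kt_c @ alphas

if __name__ == "__main__":
    X = np.random.RandomState(0).randn(500, 4)
    Xb, alphas, Kb = ekpca_fit(X, n_components=2)
    print(Xb.shape[0], "basic patterns;", ekpca_transform(X[:5], Xb, alphas, Kb).shape)
```

In this sketch the cost of projecting a test sample scales with the number of basic patterns m rather than the number of training samples n, so for m much smaller than n the extraction cost drops accordingly, which mirrors the efficiency claim in the abstract.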
ISSN: 0941-0643 (print); 1433-3058 (electronic)
DOI: 10.1007/s00521-013-1424-9