Correlation-assisted nearest shrunken centroid classifier with applications for high dimensional spectral data


Bibliographic Details
Published in: Journal of Chemometrics, 2016-01, Vol. 30 (1), p. 37-45
Authors: Xu, Jian; Xu, Qingsong; Yi, Lunzhao; Chan, Chi-On; Mok, Daniel Kam-Wah
Format: Article
Language: English
Abstract: High throughput data are frequently observed in contemporary chemical studies, and classification from spectral information is an important issue in chemometrics. Linear discriminant analysis (LDA) fails in the large-p-small-n situation for two main reasons: (1) the sample covariance matrix is singular when p > n, and (2) noise accumulates in the estimation of the class centroids in high dimensional feature space. The Independence Rule is a class of methods that overcomes these drawbacks by ignoring the correlation between spectral variables. However, strong correlation is an essential characteristic of spectral data. We propose a new correlation-assisted nearest shrunken centroid classifier (CA-NSC) that incorporates correlation information into the classification. CA-NSC combines two sources of information, the class centroid (mean) and the correlation structure (variance), to generate the classification. We verified CA-NSC with two real data analyses and a simulation study. In addition to NSC, we also compared against the soft independent modeling of class analogy (SIMCA) approach, which uses only correlation structure information for classification. The results show that CA-NSC consistently improves on NSC and SIMCA; in one of the real data analyses, the misclassification rate of CA-NSC is almost half that of NSC. In general, correlation among variables worsens the performance of NSC even though the discriminatory information contained in the class centroid remains unchanged, whereas using correlation structure alone (as in SIMCA) is satisfactory only when that structure by itself provides sufficient information for classification. Copyright © 2015 John Wiley & Sons, Ltd. The method constructs PCA models on different subsets of variables to depict the different classes and can calculate the probability of a sample being assigned to each class.
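
As a rough illustration only, and not the authors' published CA-NSC algorithm, the Python sketch below shows one way to combine the two sources of evidence named in the abstract: an NSC-style distance to a shrunken class centroid and a SIMCA-style residual distance to a per-class PCA subspace, mapped to class probabilities. The simplified shrinkage rule, the fixed number of PCA components n_comp, the weight w, and the softmax mapping are all assumptions made for this sketch, not details taken from the paper.

# Hypothetical sketch of the CA-NSC idea: combine nearest-shrunken-centroid
# (mean) evidence with per-class PCA residual (correlation-structure) evidence.
import numpy as np

def shrunken_centroids(X, y, delta=1.0):
    """Soft-threshold class centroids toward the overall centroid (simplified NSC idea)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    s = X.std(axis=0, ddof=1) + 1e-8              # per-variable scale (simplified)
    centroids = {}
    for k in classes:
        d = (X[y == k].mean(axis=0) - overall) / s        # standardized difference
        d = np.sign(d) * np.maximum(np.abs(d) - delta, 0)  # soft thresholding
        centroids[k] = overall + d * s
    return centroids, s

def class_pca_residual(Xk, x, n_comp=2):
    """SIMCA-like squared residual distance of x to the PCA subspace of one class."""
    mu = Xk.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xk - mu, full_matrices=False)
    V = Vt[:n_comp].T                             # leading principal axes
    r = (x - mu) - V @ (V.T @ (x - mu))           # residual after projection
    return np.sum(r ** 2)

def predict_proba(X, y, x_new, delta=1.0, n_comp=2, w=0.5):
    """Blend centroid distance and PCA residual, then map scores to class probabilities."""
    centroids, s = shrunken_centroids(X, y, delta)
    scores = {}
    for k, c in centroids.items():
        d_centroid = np.sum(((x_new - c) / s) ** 2)               # NSC-style distance
        d_residual = class_pca_residual(X[y == k], x_new, n_comp)  # SIMCA-style distance
        scores[k] = w * d_centroid + (1 - w) * d_residual
    keys = list(scores)
    z = -np.array([scores[k] for k in keys])
    p = np.exp(z - z.max())
    p /= p.sum()                                  # softmax over negative scores
    return dict(zip(keys, p))

In this sketch, a weight w close to 1 recovers NSC-like behaviour (centroid information only), while w close to 0 recovers SIMCA-like behaviour (correlation-structure information only), mirroring the comparison discussed in the abstract.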
ISSN: 0886-9383; 1099-128X
DOI: 10.1002/cem.2768