Robust Nonnegative Patch Alignment for Dimensionality Reduction



Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2015-11, Vol. 26 (11), pp. 2760-2774
Authors: Xinge You, Weihua Ou, Chun Lung Philip Chen, Qiang Li, Ziqi Zhu, Yuanyan Tang
Format: Article
Language: English
Abstract: Dimensionality reduction is an important method for analyzing high-dimensional data and has many applications in pattern recognition and computer vision. In this paper, we propose a robust nonnegative patch alignment for dimensionality reduction, which includes a reconstruction error term and a whole alignment term. We use the correntropy-induced metric to measure the reconstruction error, in which the weight is learned adaptively for each entry. For the whole alignment, we propose locality-preserving robust nonnegative patch alignment (LP-RNA) and sparsity-preserving robust nonnegative patch alignment (SP-RNA), which are unsupervised and supervised, respectively. In the LP-RNA, we propose a locally sparse graph to encode the local geometric structure of the manifold embedded in high-dimensional space. In particular, we select a large number p of nearest neighbors for each sample and then obtain the sparse representation with respect to these neighbors. The sparse representation is used to build a graph, which simultaneously enjoys locality, sparseness, and robustness. In the SP-RNA, we simultaneously use local geometric structure and discriminative information, in which the sparse reconstruction coefficient is used to characterize the local geometric structure and a weighted distance is used to measure the separability of different classes. We formulate the induced nonconvex objective function as a weighted nonnegative matrix factorization based on half-quadratic optimization. We propose a multiplicative update rule to solve this function and show that the objective function converges to a local optimum. Several experimental results on synthetic and real data sets demonstrate that the learned representation is more discriminative and robust than the representations learned by most existing dimensionality reduction methods.
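
The abstract sketches the computational core: the correntropy-induced reconstruction loss is handled by half-quadratic optimization, which at each outer step reduces to a weighted nonnegative matrix factorization with entry-wise adaptive weights, solved by multiplicative updates. The following Python/NumPy snippet is a minimal illustration of that weighted-NMF core only, under the assumption of a Gaussian (Welsch) kernel weight per entry; it omits the whole-alignment term, and the function name, kernel width sigma, and iteration count are illustrative choices, not taken from the paper.

import numpy as np

def correntropy_weighted_nmf(X, rank, sigma=1.0, n_iter=200, eps=1e-10):
    # Sketch (not the paper's full RNA objective): half-quadratic optimization
    # replaces the correntropy-induced loss on X - W @ H with a quadratic
    # surrogate carrying entry-wise weights S, so each inner step is a
    # weighted NMF subproblem.
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Half-quadratic step: adaptive weight for each entry from the current
        # residual (Gaussian/Welsch kernel); large residuals get weights near zero.
        R = X - W @ H
        S = np.exp(-(R ** 2) / (2.0 * sigma ** 2))
        # Multiplicative updates for the weighted objective
        # || sqrt(S) * (X - W @ H) ||_F^2; nonnegativity of W and H is preserved
        # because every factor in the update ratios is nonnegative.
        W *= ((S * X) @ H.T) / ((S * (W @ H)) @ H.T + eps)
        H *= (W.T @ (S * X)) / (W.T @ (S * (W @ H)) + eps)
    return W, H

In this sketch, entries with large residuals receive weights close to zero, which is what makes the correntropy-induced loss robust to gross corruption compared with the plain Frobenius norm.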
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2015.2393886