Rotational Invariant Dimensionality Reduction Algorithms


Bibliographic Details
Published in: IEEE Transactions on Cybernetics, 2017-11, Vol. 47 (11), p. 3733-3746
Authors: Zhihui Lai, Yong Xu, Jian Yang, Linlin Shen, David Zhang
Format: Article
Language: English
Online Access: Order full text
Description
Abstract: A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the L2-norm as the metric. In this paper, a series of methods based on the L2,1-norm are proposed for linear dimensionality reduction. Since the L2,1-norm-based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper indicates that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with the previous L2-norm-based subspace learning algorithms.
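To make the abstract's central distinction concrete, here is a minimal sketch (not code from the paper) of the L2,1-norm of a data matrix versus the plain L2 (Frobenius) norm, assuming rows correspond to data points. Because the L2,1-norm sums the unsquared Euclidean norms of the rows, an outlier row contributes only linearly rather than quadratically, which is the source of the robustness the paper exploits; the function name `l21_norm` is illustrative, not from the paper.

```python
import numpy as np

def l21_norm(X):
    # L2,1-norm: sum of the Euclidean (L2) norms of the rows of X.
    # An outlier row contributes linearly, unlike in the squared
    # Frobenius norm, where its contribution is squared.
    return np.sqrt((X ** 2).sum(axis=1)).sum()

# Toy data: rows are samples; their L2 norms are 5, 0, and 13.
X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [5.0, 12.0]])

print(l21_norm(X))            # 5 + 0 + 13 = 18.0
print(np.linalg.norm(X))      # Frobenius norm, sqrt(194) ≈ 13.93
```

The L2,1-norm is also rotational invariant: right-multiplying X by an orthogonal matrix leaves every row norm, and hence the sum, unchanged, which is the property behind the RI framework in the title.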
ISSN: 2168-2267, 2168-2275
DOI: 10.1109/TCYB.2016.2578642