Robust Image Feature Extraction via Approximate Orthogonal Low-Rank Embedding

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 193226-193237
Main Authors: Fu, Cong; Liu, Zhigui; Li, Li
Format: Article
Language: English
Abstract: Feature extraction (FE) plays an important role in machine learning. To cope with the "curse of dimensionality", the usual approach is to transform the original samples into a low-dimensional target space in which the FE task is performed. However, real-world data are invariably corrupted by various kinds of noise or interfered with by outliers, making feature extraction extremely challenging. Hence, we propose a novel image FE method via approximate orthogonal low-rank embedding (AOLRE), which adopts an orthogonal matrix to preserve the principal energy of the samples; the introduction of the ℓ2,1-norm makes the features more compact, discriminative, and interpretable. In addition, the weighted Schatten p-norm is adopted in this model to fully explore the effects of different ranks while approximating the original low-rank hypothesis. Meanwhile, correntropy is applied in AOLRE as a robust measure, which effectively suppresses the adverse influence of contaminated data and enhances the robustness of the algorithm. Finally, the introduction of a classification loss term allows the model to fit supervised scenarios effectively. Five common datasets are used to evaluate the performance of AOLRE. The results show that the recognition accuracy and robustness of AOLRE are significantly better than those of several advanced FE algorithms, with improvements ranging from 2% to 15%.
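The abstract leans on three standard constructs: the ℓ2,1-norm (row-sparsity regularizer), the weighted Schatten p-norm (a rank surrogate), and correntropy (a robust similarity measure). A minimal NumPy sketch of their textbook definitions follows; this is not the authors' AOLRE solver, and the weights, kernel width `sigma`, and exponent `p` are illustrative assumptions:

```python
import numpy as np

def l21_norm(X):
    # ℓ2,1-norm: sum of the Euclidean norms of the rows of X,
    # which promotes row-sparse (feature-selective) solutions.
    return np.sum(np.linalg.norm(X, axis=1))

def weighted_schatten_p_norm(X, weights, p):
    # Weighted Schatten p-norm: (sum_i w_i * sigma_i^p)^(1/p),
    # where sigma_i are the singular values of X. With all
    # weights equal to 1 and p = 1 this reduces to the nuclear norm.
    singular_values = np.linalg.svd(X, compute_uv=False)
    return np.sum(weights * singular_values**p) ** (1.0 / p)

def correntropy(x, y, sigma=1.0):
    # Sample correntropy with a Gaussian kernel: averages
    # exp(-(x-y)^2 / (2*sigma^2)), so large residuals are
    # down-weighted instead of dominating a squared-error loss.
    return np.mean(np.exp(-((x - y) ** 2) / (2.0 * sigma**2)))

# Example: a matrix with row norms 5 and 0.
X = np.array([[3.0, 4.0],
              [0.0, 0.0]])
print(l21_norm(X))                                     # 5.0
print(weighted_schatten_p_norm(X, np.ones(2), 1.0))    # nuclear norm: 5.0
print(correntropy(np.zeros(3), np.zeros(3)))           # identical inputs: 1.0
```

The robustness claim in the abstract corresponds to the shape of the correntropy kernel: as a residual grows, its contribution saturates toward zero rather than growing quadratically, which is what suppresses the influence of outliers and contaminated pixels.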
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3033093