Multiple kernel low-rank representation-based robust multi-view subspace clustering



Bibliographic Details
Published in: Information Sciences 2021-04, Vol. 551, p. 324-340
Main Authors: Zhang, Xiaoqian, Ren, Zhenwen, Sun, Huaijiang, Bai, Keqiang, Feng, Xinghua, Liu, Zhigui
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Owing to the presence of complex noise, it is extremely challenging to learn a low-dimensional subspace structure directly from the original data. In addition, the nonlinear structure of the data makes multi-view subspace clustering more difficult. In this paper, we propose a multiple kernel low-rank representation-based robust multi-view subspace clustering method (MKLR-RMSC) that combines a learnable low-rank multiple kernel trick with co-regularization. MKLR-RMSC mainly performs four tasks: 1) fully mining the complementary information provided by the different views in the feature spaces, 2) modeling the feature-space data as lying in a union of multiple low-dimensional subspaces, 3) driving all view-specific representations toward a common centroid, and 4) effectively handling non-Gaussian noise in the data. In our model, the weighted Schatten p-norm is applied to weight singular values of different ranks differently while closely approximating the original rank function. Moreover, different predefined kernel matrices are designed for different views, which is more conducive to mining the unique and complementary information of each view. In addition, correntropy is adopted as a robust measure in MKLR-RMSC. Experiments on six commonly used datasets show that our method is more effective and robust than several state-of-the-art methods.
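
To make the components named in the abstract concrete, below is a minimal NumPy sketch (not the authors' released code) of three ingredients the method relies on: a weighted Schatten p-norm, the correntropy measure used as a robust data-fidelity term, and a convex combination of predefined base kernels. The function names, the weight vector, the exponent p, and the kernel bandwidth sigma are illustrative assumptions, not values taken from the paper.

    # Illustrative sketch only; names and defaults are assumptions for the example.
    import numpy as np

    def weighted_schatten_p_norm(X, weights=None, p=0.5):
        """Weighted Schatten p-norm raised to the p-th power:
        sum_i w_i * sigma_i**p, where sigma_i are the singular values of X."""
        s = np.linalg.svd(X, compute_uv=False)
        if weights is None:
            weights = np.ones_like(s)
        return float(np.sum(weights * s**p))

    def correntropy(a, b, sigma=1.0):
        """Empirical correntropy between two vectors with a Gaussian kernel;
        large residuals are exponentially down-weighted, which gives
        robustness to non-Gaussian noise and outliers."""
        r = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        return float(np.mean(np.exp(-r**2 / (2.0 * sigma**2))))

    def combine_kernels(kernels, mu):
        """Convex combination of predefined base kernel matrices,
        K = sum_v mu_v * K_v, with the mixing weights mu normalized
        to lie on the simplex."""
        mu = np.asarray(mu, dtype=float)
        mu = mu / mu.sum()
        return sum(m * K for m, K in zip(mu, kernels))

    # Tiny usage example on random data
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 4))
    print(weighted_schatten_p_norm(X, p=0.5))
    print(correntropy(X[0], X[1], sigma=1.0))

In the full MKLR-RMSC objective these quantities would appear, respectively, as the low-rank regularizer on the view-specific representations, the robust loss on the reconstruction residual, and the learnable multiple-kernel construction; the sketch only shows how each quantity is evaluated.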
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2020.10.059