Multi-view Embedding with Adaptive Shared Output and Similarity for unsupervised feature selection
| Published in: | Knowledge-Based Systems, 2019-02, Vol. 165, pp. 40-52 |
| --- | --- |
| Main authors: | , , |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Full text |
Abstract: The problem of multi-view feature selection, a kind of feature learning paradigm, has attracted considerable interest in the past decade. It is crucial for feature selection to preserve both the global structure and the locality of the original features. Existing unsupervised feature selection methods mostly preserve either the global or the local structure, and compute the sparse representation for each view individually. In addition, several methods introduce a predefined similarity matrix over the different views and keep it fixed during learning, which largely ignores the correlations between the individual views. We therefore focus on multi-view feature selection and propose a new method, Multi-view Embedding with Adaptive Shared Output and Similarity (ME-ASOS). The method introduces embedding directly into multi-view learning: it maps the high-dimensional data to a shared subspace through view-wise multi-output regular projections, and learns a common similarity matrix with an improved algorithm to characterize the structures across the different views. A regularization parameter largely eliminates the adverse effect of noisy and unfavorable features on the global structure, and a further regularization term on the local structure avoids the trivial solution and adds a uniform-distribution prior. Compared with 5 existing algorithms on 4 real-world datasets, the experimental results show that ME-ASOS captures more of the shared information between views, selects more discriminative features, and achieves superior accuracy and higher efficiency.
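The shared-output idea can be illustrated with a small numerical sketch. The following Python/NumPy code is not the paper's algorithm: the exact objective is not given in the abstract, so the row-sparsity regularizer implied by "sparse representation" is replaced here with a plain ridge penalty, and the function and parameter names (`shared_output_embedding`, `lam`, `dim`) are ours. It only shows the general scheme of alternating between view-wise projections and one shared output, with row norms of each projection used as feature scores.

```python
import numpy as np

def shared_output_embedding(views, dim, lam=0.1, n_iter=50):
    """Illustrative sketch: project each view X_v (n x d_v) with its own W_v
    (d_v x dim) so that all projections approximate one shared output Y
    (n x dim).  A ridge penalty stands in for the sparsity-inducing
    regularizer; feature scores are taken from the row norms of W_v."""
    n = views[0].shape[0]
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((n, dim))
    Ws = [rng.standard_normal((X.shape[1], dim)) for X in views]
    for _ in range(n_iter):
        # Update each view-wise projection by ridge regression toward Y.
        Ws = [np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
              for X in views]
        # Update the shared output as the mean of the view-wise projections.
        Y = np.mean([X @ W for X, W in zip(views, Ws)], axis=0)
        # Rescale columns to avoid collapsing to the trivial zero solution.
        Y /= np.linalg.norm(Y, axis=0, keepdims=True) + 1e-12
    # Rank features of view v by the row norms of W_v (larger = more useful).
    scores = [np.linalg.norm(W, axis=1) for W in Ws]
    return Y, Ws, scores
```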
Highlights:
• ME-ASOS incorporates global and local structure preservation, embedding from multiple views, a shared output subspace, and an adaptive similarity matrix into one framework.
• For the global structure, the method maps the different views into a shared low-dimensional embedding subspace with view-wise multi-output regular projections.
• For the local structure, it learns a common similarity matrix by an improved algorithm to characterize the structures across the different views (see the sketch after this list).
• Experiments on public datasets show that ME-ASOS outperforms other state-of-the-art methods.
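As promised above, here is a hedged sketch of how an adaptive common similarity matrix with a uniform-distribution prior can be learned row by row. The abstract does not spell out the paper's update, so this code uses the standard formulation in which each row of S minimizes a distance-weighted assignment plus a quadratic term over the probability simplex; `gamma` and the helper `project_to_simplex` are our own names, not the paper's.

```python
import numpy as np

def adaptive_similarity(embeddings, gamma=1.0):
    """Illustrative sketch: learn one shared similarity matrix S from the
    embedded samples of several views.  Row i of S solves
        min_{s_i >= 0, sum(s_i) = 1}  sum_j d_ij * s_ij + gamma * ||s_i||^2,
    where d_ij accumulates squared distances over all views; the quadratic
    term acts as a uniform prior that rules out the trivial one-hot row."""
    n = embeddings[0].shape[0]
    # Accumulate pairwise squared distances across the views' embeddings.
    D = np.zeros((n, n))
    for Y in embeddings:
        sq = np.sum(Y ** 2, axis=1)
        D += sq[:, None] + sq[None, :] - 2.0 * (Y @ Y.T)
    S = np.zeros((n, n))
    for i in range(n):
        v = -D[i] / (2.0 * gamma)       # unconstrained minimizer direction
        S[i] = project_to_simplex(v)    # enforce s_i >= 0 and sum(s_i) = 1
    return S

def project_to_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

A larger `gamma` pushes each row of S toward the uniform distribution, while a smaller `gamma` concentrates the similarity mass on the nearest neighbors; in practice such a trade-off parameter is tuned per dataset.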
ISSN: 0950-7051, 1872-7409
DOI: 10.1016/j.knosys.2018.11.017