Hierarchical graph augmented stacked autoencoders for multi-view representation learning
Published in: Information fusion 2024-02, Vol. 102, p. 102068, Article 102068
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: With the recent success of deep neural networks, stacked autoencoder networks have received considerable attention for robust unsupervised representation learning. However, recent autoencoder methods cannot make full use of multi-view information and thus fail to further improve many real-world applications by exploring the geometric structures of multi-view data. To address this issue, we introduce hierarchical graph augmented stacked autoencoders (HGSAE) for unsupervised multi-view representation learning. Specifically, a hierarchical graph structure is first adapted to stacked autoencoders to learn view-specific representations, aiming to preserve the geometric information of multi-view data through local and non-local graph regularizations. A general or common representation is then learned by reconstructing each single view using fully connected neural networks. In this way, the proposed method not only preserves the geometric information in multi-view data but also automatically balances the complementarity and consistency among different views. Extensive experiments on six popular unsupervised representation learning datasets demonstrate the effectiveness of our method compared with recent state-of-the-art autoencoder methods.
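The abstract does not state the exact objective, but the "local graph regularization" idea it describes can be illustrated with a minimal, hypothetical sketch of a graph-regularized autoencoder for a single view. The k-NN affinity graph, the trace-form Laplacian penalty, and the names `knn_laplacian`, `GraphRegularizedAE`, and `lambda_graph` are illustrative assumptions, not the paper's HGSAE formulation.

```python
# Minimal sketch: graph-regularized autoencoder for ONE view (assumed setup,
# not the authors' exact HGSAE objective).
import torch
import torch.nn as nn

def knn_laplacian(x, k=5):
    """Unnormalized graph Laplacian L = D - W from a k-NN affinity (illustrative)."""
    dist = torch.cdist(x, x)                              # pairwise distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop the self-neighbor
    w = torch.zeros_like(dist)
    w.scatter_(1, knn, 1.0)
    w = torch.max(w, w.t())                               # symmetrize the affinity
    return torch.diag(w.sum(dim=1)) - w                   # L = D - W

class GraphRegularizedAE(nn.Module):
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def loss_fn(x, x_hat, z, lap, lambda_graph=0.1):
    recon = ((x - x_hat) ** 2).mean()
    # tr(Z^T L Z) pushes neighboring samples toward similar latent codes
    smooth = torch.trace(z.t() @ lap @ z) / x.shape[0]
    return recon + lambda_graph * smooth

# usage sketch on toy data: one view with 128 samples and 20 features
x = torch.randn(128, 20)
lap = knn_laplacian(x)
model = GraphRegularizedAE(20)
z, x_hat = model(x)
loss = loss_fn(x, x_hat, z, lap)
loss.backward()
```

The Laplacian term is the standard way to encode "nearby inputs should have nearby codes"; a non-local variant would build the affinity from more distant but structurally similar samples rather than from Euclidean neighbors.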
• Design local and non-local graph constructions for view-specific representations.
• Adapt a hierarchical graph structure to stacked autoencoders.
• Propose hierarchical graph augmented stacked autoencoders.
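The second ingredient described above, a common representation learned by reconstructing each single view through fully connected networks, could look roughly like the following sketch. The concatenation-based fusion, the class name `MultiViewFusion`, and all dimensions are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical sketch: fuse per-view features into one common code and ask one
# fully connected decoder per view to reconstruct that view from the shared code.
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    def __init__(self, view_dims, code_dim=32):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(sum(view_dims), code_dim), nn.ReLU())
        # one decoder per view, reconstructing that view from the common code
        self.decoders = nn.ModuleList([nn.Linear(code_dim, d) for d in view_dims])

    def forward(self, views):
        h = self.fuse(torch.cat(views, dim=1))     # common representation
        return h, [dec(h) for dec in self.decoders]

# usage sketch with two toy views of different dimensionality
views = [torch.randn(128, 20), torch.randn(128, 30)]
model = MultiViewFusion([20, 30])
h, recons = model(views)
# summing per-view reconstruction errors lets gradient flow balance the views
loss = sum(((v - r) ** 2).mean() for v, r in zip(views, recons))
loss.backward()
```

Because every view must be recoverable from the shared code, the fusion cannot collapse onto a single dominant view, which is one simple way to balance complementarity and consistency across views.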
ISSN: 1566-2535, 1872-6305
DOI: 10.1016/j.inffus.2023.102068