A2AE: Towards adaptive multi-view graph representation learning via all-to-all graph autoencoder architecture

Bibliographic Details
Published in: Applied Soft Computing, 2022-08, Vol. 125, p. 109193, Article 109193
Authors: Sun, Dengdi; Li, Dashuang; Ding, Zhuanlian; Zhang, Xingyi; Tang, Jin
Format: Article
Language: English
Description

Abstract: The multi-view graph is a fundamental data model used to describe complex networks in the real world. Learning representations of multi-view graphs is a vital step toward understanding complex systems and extracting knowledge accurately. However, most existing methods focus on a single view or simply add multiple views together, which prevents them from exploiting the rich relational information across views and ignores the differing importance of the views. In this paper, a novel all-to-all graph autoencoder, named A2AE, is proposed for multi-view graph representation learning. The all-to-all model first embeds the attributed multi-view graph into a compact representation by semantically fusing the view-specific representations produced by multiple encoders; multiple decoders are then trained to reconstruct the graph structure and node attributes. Finally, a self-training clustering module is attached for clustering tasks.

Highlights:
•An all-to-all graph autoencoder is proposed for unsupervised multi-view network embedding.
•An attention mechanism is leveraged to adaptively learn the semantic weights of the views.
•Experimental results show that the proposed algorithm outperforms state-of-the-art methods.
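The abstract describes the pipeline only at a high level. The following is a minimal PyTorch sketch of such a pipeline under stated assumptions: two-layer GCN encoders per view, a softmax attention over view embeddings for semantic fusion, inner-product structure decoders, and linear attribute decoders. Class names such as A2AESketch and ViewEncoder, the layer sizes, and the exact fusion form are hypothetical and are not taken from the paper.

# Minimal sketch in the spirit of the abstract above (NOT the authors' code):
# per-view GCN encoders, attention fusion of view embeddings, inner-product
# structure decoders and linear attribute decoders. Names and sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return F.relu(self.lin(a_hat @ h))


class ViewEncoder(nn.Module):
    """Two-layer GCN mapping one view's graph to node embeddings."""
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hid_dim)
        self.gc2 = GCNLayer(hid_dim, z_dim)

    def forward(self, a_hat, x):
        return self.gc2(a_hat, self.gc1(a_hat, x))


class A2AESketch(nn.Module):
    def __init__(self, n_views, in_dim, hid_dim=64, z_dim=16):
        super().__init__()
        # "All-to-all": every view gets its own encoder and decoder.
        self.encoders = nn.ModuleList(
            [ViewEncoder(in_dim, hid_dim, z_dim) for _ in range(n_views)])
        self.attr_decoders = nn.ModuleList(
            [nn.Linear(z_dim, in_dim) for _ in range(n_views)])
        # Attention scorer used to weight the view-specific embeddings.
        self.attn = nn.Linear(z_dim, 1, bias=False)

    def forward(self, a_hats, x):
        # a_hats: list of normalized adjacency matrices (N x N), one per view.
        # x:      shared node-attribute matrix (N x in_dim).
        zs = torch.stack([enc(a, x) for enc, a in zip(self.encoders, a_hats)])  # (V, N, z)
        scores = self.attn(torch.tanh(zs)).mean(dim=1)        # (V, 1) per-view score
        weights = torch.softmax(scores, dim=0).unsqueeze(-1)  # (V, 1, 1) view weights
        z = (weights * zs).sum(dim=0)                         # fused embedding (N, z)
        # Reconstruct each view's structure from the fused code, plus attributes.
        adj_recons = [torch.sigmoid(z @ z.t()) for _ in a_hats]
        attr_recons = [dec(z) for dec in self.attr_decoders]
        return z, adj_recons, attr_recons

Training such a sketch would minimize per-view reconstruction losses (for example, binary cross-entropy on the adjacency reconstructions plus a mean-squared attribute loss). The self-training clustering module mentioned in the abstract could plausibly be realized as a DEC-style KL-divergence objective on soft cluster assignments computed from the fused embedding z, but the paper itself should be consulted for the exact formulation.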
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2022.109193