Unsupervised Graph Embedding via Adaptive Graph Learning

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023-04, Vol. 45 (4), p. 5329-5336
Main Authors: Zhang, Rui; Zhang, Yunxing; Lu, Chengjun; Li, Xuelong
Format: Article
Language: English
Description
Summary: Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding. However, the performance of GAEs depends heavily on the quality of the graph structure, i.e., of the adjacency matrix; GAEs perform poorly when the adjacency matrix is incomplete or perturbed. In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE). The proposed methods expand the application range of GAEs to graph embedding on general datasets without a given graph structure. Meanwhile, the adaptive learning mechanism can initialize the adjacency matrix without being sensitive to parameter choices. In addition, the latent representations are embedded with the Laplacian graph structure so that the topology of the graph is preserved in the vector space. Moreover, the adjacency matrix can be self-learned for better embedding performance when the original graph structure is incomplete. With adaptive learning, the proposed methods are much more robust to the quality of the graph structure. Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, link prediction, and graph visualization tasks.
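
To make the main ingredients of the abstract concrete, below is a minimal PyTorch sketch of the general idea: an autoencoder whose adjacency matrix is a learnable parameter initialized from the data, trained with a reconstruction loss plus a Laplacian term tr(Z^T L Z) that keeps the embedding consistent with the learned graph. This is not the authors' BAGE/VBAGE implementation; the class name, losses, initialization, and hyperparameters are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGAE(nn.Module):
    """Autoencoder with a self-learned adjacency matrix (illustrative sketch only)."""

    def __init__(self, n_nodes, in_dim, emb_dim):
        super().__init__()
        # Learnable affinity scores; a row-wise softmax turns them into a dense,
        # adaptively learned adjacency matrix S.
        self.adj_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.enc = nn.Linear(in_dim, emb_dim)
        self.dec = nn.Linear(emb_dim, in_dim)

    def adjacency(self):
        return torch.softmax(self.adj_logits, dim=1)

    def forward(self, x):
        s = self.adjacency()
        z = F.relu(self.enc(s @ x))   # GCN-style propagation with the learned S
        x_hat = self.dec(s @ z)       # reconstruct node features from the embedding Z
        return z, x_hat, s


def laplacian_penalty(z, s):
    # tr(Z^T L Z) with L = D - S: pulls embeddings of strongly connected nodes
    # together, so the learned topology is preserved in the vector space.
    lap = torch.diag(s.sum(dim=1)) - s
    return torch.trace(z.t() @ lap @ z) / z.shape[0]


# Toy usage on random features (no prior graph structure is required).
x = torch.randn(50, 16)
model = AdaptiveGAE(n_nodes=50, in_dim=16, emb_dim=8)
with torch.no_grad():                 # data-driven initialization of the adjacency
    model.adj_logits.copy_(x @ x.t() / x.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    z, x_hat, s = model(x)
    loss = F.mse_loss(x_hat, x) + 0.1 * laplacian_penalty(z, s)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the adjacency matrix is a trainable parameter rather than a fixed input, the same sketch runs on plain feature matrices with no given graph and can repair an incomplete graph during training, which is the robustness property the abstract emphasizes.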
ISSN: 0162-8828, 1939-3539, 2160-9292
DOI: 10.1109/TPAMI.2022.3202158