TLVANE: a two-level variation model for attributed network embedding
Published in: Neural Computing & Applications, May 2020, Vol. 32, Issue 9, pp. 4835-4847
Authors:
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: Network embedding aims to learn low-dimensional representations for nodes in social networks, which can serve many applications such as node classification, link prediction, and visualization. Most network embedding methods focus on learning the representations solely from the topological structure. Recently, attributed network embedding, which uses both the topological structure and node content to jointly learn latent representations, has become a hot topic. However, previous studies obtain the joint representations by directly concatenating the representations from each aspect, which may lose the correlations between the topological structure and node content. In this paper, we propose a new attributed network embedding method, TLVANE, which addresses this drawback by exploiting deep variational autoencoders (VAEs). In particular, a two-level VAE model is built, where the first level accounts for the joint representations and the second for the embeddings of each aspect. Extensive experiments on three real-world datasets have been conducted, and the results demonstrate the superiority of the proposed method over state-of-the-art competitors.

(A hypothetical code sketch of such a two-level layout follows the record below.)
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-018-3875-5
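The abstract only outlines the architecture, so below is a minimal, hypothetical sketch of how a two-level VAE for attributed networks could be wired up in PyTorch: two second-level VAEs embed the adjacency row (topology) and the node attribute vector separately, and a first-level VAE compresses their latent codes into a joint node embedding rather than relying on plain concatenation. All class names, layer sizes, the way the two levels are coupled, and the loss weighting are assumptions made purely for illustration; this is not the authors' TLVANE implementation.

```python
# Hypothetical sketch of a two-level VAE layout for attributed network
# embedding. Architecture details are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AspectVAE(nn.Module):
    """Second-level VAE: embeds one aspect (topology OR node attributes)."""

    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim))

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar, z


class TwoLevelVAE(nn.Module):
    """First-level VAE on top of the two aspect latents, so the joint
    representation is learned rather than obtained by concatenation alone."""

    def __init__(self, struct_dim, attr_dim, hidden_dim=256,
                 aspect_dim=64, joint_dim=32):
        super().__init__()
        self.struct_vae = AspectVAE(struct_dim, hidden_dim, aspect_dim)
        self.attr_vae = AspectVAE(attr_dim, hidden_dim, aspect_dim)
        # The first level reads both aspect latents and compresses them
        # into a single joint latent (the node embedding used downstream).
        self.joint_vae = AspectVAE(2 * aspect_dim, hidden_dim, joint_dim)

    def forward(self, adj_row, attr_row):
        s_rec, s_mu, s_lv, s_z = self.struct_vae(adj_row)
        a_rec, a_mu, a_lv, a_z = self.attr_vae(attr_row)
        j_in = torch.cat([s_z, a_z], dim=-1)
        j_rec, j_mu, j_lv, _ = self.joint_vae(j_in)
        return (s_rec, s_mu, s_lv), (a_rec, a_mu, a_lv), (j_rec, j_mu, j_lv, j_in)


def kl_term(mu, logvar):
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())


def loss_fn(model_out, adj_row, attr_row):
    (s_rec, s_mu, s_lv), (a_rec, a_mu, a_lv), (j_rec, j_mu, j_lv, j_in) = model_out
    # Reconstruction + KL at every level; equal weights are arbitrary here.
    return (F.mse_loss(s_rec, adj_row) + kl_term(s_mu, s_lv)
            + F.mse_loss(a_rec, attr_row) + kl_term(a_mu, a_lv)
            + F.mse_loss(j_rec, j_in) + kl_term(j_mu, j_lv))
```

Usage would look like `model = TwoLevelVAE(struct_dim=num_nodes, attr_dim=num_attrs)`, with the joint-level mean `j_mu` taken as the node embedding for downstream tasks such as classification. The paper's actual coupling between the two levels and its training objective may differ from this sketch.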