Content to Node: Self-Translation Network Embedding

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2021-02, Vol. 33 (2), p. 431-443
Main Authors: He, Zhicheng, Liu, Jie, Zeng, Yuyuan, Wei, Lai, Huang, Yalou
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Summary: This paper concerns the problem of network embedding (NE), which aims to learn low-dimensional representations for network nodes. Such dense representations hold great promise for many network analysis problems. However, existing approaches still face challenges posed by the characteristics of complex real-world networks. First, for networks with rich content information, previous methods often learn separate content and structure representations, which must then be combined in a post-processing step; such empirical combination strategies often leave the final vectors suboptimal. Second, existing methods preserve structural information by considering a short, fixed neighborhood scope, such as first- and/or second-order proximities, yet it is hard to choose an appropriate neighborhood scope for complex problems. To this end, we propose a novel sequence-to-sequence-based NE framework referred to as Self-Translation Network Embedding (STNE). Given sampled node sequences, STNE translates each sequence from its content sequence to its node sequence. On the one hand, the bi-directional LSTM encoder seamlessly fuses content and structure information from the raw input. On the other hand, high-order proximity can be flexibly learned through the LSTM's memory, capturing long-range structural information. Experimental results on three real-world datasets demonstrate the superiority of STNE.
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2019.2932388
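
The summary above describes STNE as a sequence-to-sequence model that encodes the content features of a sampled node sequence with a bi-directional LSTM and decodes the corresponding node IDs. The sketch below illustrates that idea only; it is not the authors' implementation, and the layer sizes, the single-layer unidirectional LSTM decoder, and the cross-entropy training objective are assumptions made for illustration.

```python
# Hypothetical sketch of the "self-translation" idea: translate a walk's
# content sequence into its node-ID sequence. Not the authors' released code;
# all hyperparameters and the decoder design are illustrative assumptions.
import torch
import torch.nn as nn


class STNESketch(nn.Module):
    def __init__(self, content_dim, num_nodes, hidden_dim=128):
        super().__init__()
        # Bi-directional LSTM encoder reads the content features of the nodes
        # visited by a sampled walk, fusing content and structure information.
        self.encoder = nn.LSTM(content_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Assumed decoder: a unidirectional LSTM that predicts the node IDs
        # of the same walk (the "self-translation" target).
        self.decoder = nn.LSTM(2 * hidden_dim, 2 * hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_nodes)

    def forward(self, content_seq):
        # content_seq: (batch, walk_len, content_dim) content vectors of the walk.
        enc_states, _ = self.encoder(content_seq)   # (batch, walk_len, 2*hidden)
        dec_states, _ = self.decoder(enc_states)    # (batch, walk_len, 2*hidden)
        logits = self.out(dec_states)               # (batch, walk_len, num_nodes)
        # The encoder states can serve as (content + structure) node representations.
        return logits, enc_states


# Toy usage: 4 walks of length 10 over a graph with 500 nodes and 50-dim content.
model = STNESketch(content_dim=50, num_nodes=500)
content = torch.randn(4, 10, 50)
node_ids = torch.randint(0, 500, (4, 10))
logits, embeddings = model(content)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 500), node_ids.reshape(-1))
loss.backward()
```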