SepNE: Bringing Separability to Network Embedding
Saved in:

Main authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Many successful methods have been proposed for learning
low-dimensional representations of large-scale networks, yet almost all
existing methods are designed as inseparable processes, learning embeddings
for the entire network even when only a small proportion of nodes are of
interest. This causes great inconvenience, especially on super-large or
dynamic networks, where these methods become almost impossible to apply. In
this paper, we formalize the problem of separated matrix factorization,
based on which we elaborate a novel objective function that preserves both
local and global information. We further propose SepNE, a simple and
flexible network embedding algorithm that independently learns
representations for different subsets of nodes in separated processes. By
introducing separability, our algorithm avoids the redundant effort of
embedding irrelevant nodes, yielding scalability to super-large networks,
straightforward distributed implementation, and further adaptations. We
demonstrate the effectiveness of this approach on several real-world
networks of different scales and subjects. With comparable accuracy, our
approach significantly outperforms state-of-the-art baselines in running
time on large networks. |
DOI: | 10.48550/arxiv.1811.05614 |
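To make the idea of separable embedding concrete, below is a minimal NumPy sketch of the general approach the abstract describes: factorizing only the rows of a proximity matrix that belong to a chosen node subset, so embeddings are computed for those nodes alone rather than for the whole network. This is an illustrative assumption, not the authors' SepNE algorithm; the function name `embed_subset`, the use of first-order adjacency as the proximity measure, and the SVD-based factorization are all choices made here for brevity.

```python
import numpy as np

# Hypothetical sketch (not the authors' SepNE implementation): learn
# embeddings only for a subset of nodes by factorizing the rows of a
# proximity matrix restricted to that subset, instead of factorizing
# the full n x n matrix for the entire network.

def embed_subset(adj, subset, dim):
    """Return dim-dimensional embeddings for the nodes in `subset` only.

    adj    : (n, n) adjacency matrix of the whole network
    subset : list of indices of the nodes of interest
    dim    : embedding dimension
    """
    # Proximity of the subset nodes to all n nodes (here: adjacency rows).
    prox = adj[subset, :]                      # shape (|S|, n)
    # Truncated SVD of the |S| x n block yields embeddings for the subset.
    u, s, _ = np.linalg.svd(prox, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])       # shape (|S|, dim)

# Toy 4-node path graph; embed only nodes {1, 2}.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
emb = embed_subset(A, [1, 2], dim=2)
print(emb.shape)  # (2, 2)
```

Because each subset is factorized independently, different subsets can be embedded in separate processes or on separate machines, which is the separability property the abstract highlights.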