Learning node embeddings via summary graphs: a brief theoretical analysis
Saved in:
Format: | Article |
Language: | English |
Online access: | Order full text |
Summary: | Graph representation learning plays an important role in many graph
mining applications, but learning embeddings of large-scale graphs remains a
problem. Recent works try to improve scalability via graph summarization --
i.e., they learn embeddings on a smaller summary graph and then restore the
node embeddings of the original graph. However, all existing works depend on
heuristic designs and lack theoretical analysis.
Different from existing works, we contribute an in-depth theoretical analysis
of three specific embedding learning methods based on an introduced kernel
matrix, and reveal that learning embeddings via graph summarization is in fact
learning embeddings on an approximate graph constructed by the configuration
model. We also analyze the approximation error. To the best of our knowledge,
this is the first work to give a theoretical analysis of this approach.
Furthermore, our analysis framework offers interpretations of some existing
methods and provides valuable insights for future work on this problem. |
DOI: | 10.48550/arxiv.2207.01189 |
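The pipeline the summary describes -- learn embeddings on a smaller summary graph, then lift them back to the original nodes -- and the configuration-model view of the approximate graph can be sketched on a toy example. This is only an illustration of the general idea, not the paper's actual method: the graph, the two-supernode partition, and the choice of eigenvectors as "embeddings" are all assumptions made for the sketch.

```python
import numpy as np

# Toy graph: 6 nodes, two triangles joined by one edge (symmetric adjacency).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Hypothetical partition: nodes 0-2 form supernode 0, nodes 3-5 supernode 1.
groups = np.array([0, 0, 0, 1, 1, 1])
k = 2
P = np.zeros((6, k))
P[np.arange(6), groups] = 1.0        # membership matrix (6 x 2)

# Summary graph: total edge weight between each pair of supernodes.
S = P.T @ A @ P

# Configuration-model approximation of the original graph: redistribute each
# supernode pair's edge weight to member nodes in proportion to their degrees.
d = A.sum(axis=1)                    # node degrees
D = P.T @ d                          # total degree per supernode
W = P * (d / D[groups])[:, None]     # degree-weighted membership
A_approx = W @ S @ W.T               # approximate adjacency; preserves degrees

# Learn embeddings on the small summary graph (here: top eigenvectors of S),
# then restore node embeddings by copying each supernode's embedding down.
vals, vecs = np.linalg.eigh(S)
Z_super = vecs[:, -2:]               # 2-dimensional supernode embeddings
Z_nodes = P @ Z_super                # each node inherits its supernode's row
```

A useful property of this reconstruction is that the approximate graph matches the original node degrees exactly (`A_approx.sum(axis=1) == d`), which is precisely the expected-adjacency behavior of the configuration model; the approximation error the abstract refers to lies in how the within-block edge pattern deviates from this degree-proportional spread.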