Towards characterizing the value of edge embeddings in Graph Neural Networks
Saved in:
Main Authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Graph neural networks (GNNs) are the dominant approach to solving machine
learning problems defined over graphs. Despite much theoretical and empirical
work in recent years, our understanding of finer-grained aspects of
architectural design for GNNs remains impoverished. In this paper, we consider
the benefits of architectures that maintain and update edge embeddings. On the
theoretical front, under a suitable computational abstraction for a layer in
the model, as well as memory constraints on the embeddings, we show that there
are natural tasks on graphical models for which architectures leveraging edge
embeddings can be much shallower. Our techniques are inspired by results on
time-space tradeoffs in theoretical computer science. Empirically, we show
architectures that maintain edge embeddings almost always improve on their
node-based counterparts -- frequently significantly so in topologies that have
"hub" nodes. |
---|---|
DOI: | 10.48550/arxiv.2410.09867 |