On the Universality of Graph Neural Networks on Large Random Graphs
Main authors:
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs. In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their approximation power on random graph models. In the absence of input node features, however, just as GNNs are limited by the Weisfeiler-Lehman isomorphism test, c-GNNs are severely limited on simple random graph models. For instance, they fail to distinguish the communities of a well-separated Stochastic Block Model (SBM) with constant degree function. We therefore consider recently proposed architectures that augment GNNs with unique node identifiers, referred to here as Structural GNNs (SGNNs). We study the convergence of SGNNs to their continuous counterparts (c-SGNNs) in the large random graph limit, under new conditions on the node identifiers. We then show that c-SGNNs are strictly more powerful than c-GNNs in the continuous limit, and prove their universality on several random graph models of interest, including most SBMs and a large class of random geometric graphs. Our results cover both permutation-invariant and permutation-equivariant architectures.
DOI: 10.48550/arxiv.2105.13099
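The failure mode the abstract describes can be seen concretely. The following minimal NumPy sketch (an illustration under our own assumptions, not code from the paper) runs plain mean-aggregation message passing on a well-separated two-block SBM: with constant input features every node receives an identical embedding, so no readout can recover the communities, while unique random node identifiers, in the spirit of the SGNNs above, make same-community nodes measurably more similar than cross-community ones. The sampling scheme, aggregation rule, and all parameter values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_block_sbm(n, p_in, p_out):
    """Sample a symmetric two-community SBM with equal-sized blocks."""
    z = np.repeat([0, 1], n // 2)                           # community labels
    probs = np.where(z[:, None] == z[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)        # upper triangle
    return (upper + upper.T).astype(float), z

def mean_aggregate(adj, h, n_layers=2):
    """Featureless-GNN stand-in: repeated mean aggregation over neighbors."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)   # avoid division by 0
    for _ in range(n_layers):
        h = adj @ h / deg
    return h

n = 200
adj, z = two_block_sbm(n, p_in=0.9, p_out=0.1)              # well-separated SBM

# Constant input features: mean aggregation maps all-ones to all-ones,
# so every node ends up with exactly the same embedding (spread is 0).
h_const = mean_aggregate(adj, np.ones((n, 8)))
print("embedding spread, constant features:", h_const.std(axis=0).max())

# Unique random identifiers: same-community nodes share more neighbors,
# so their aggregated identifiers are more similar on average.
h_id = mean_aggregate(adj, rng.standard_normal((n, 64)))
gram = h_id @ h_id.T
same = (z[:, None] == z[None, :]) & ~np.eye(n, dtype=bool)  # exclude diagonal
diff = z[:, None] != z[None, :]
print("mean similarity, same community:  ", gram[same].mean())
print("mean similarity, across community:", gram[diff].mean())
```

The first print is exactly zero, reflecting the indistinguishability result for featureless (c-)GNNs, while the two similarity averages separate by community; the qualitative gap, not the exact numbers, is the point.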