Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Designing expressive Graph Neural Networks (GNNs) is a fundamental topic in the graph learning community. So far, GNN expressiveness has been primarily assessed via the Weisfeiler-Lehman (WL) hierarchy. However, such an expressivity measure has notable limitations: it is inherently coarse, qualitative, and may not well reflect practical requirements (e.g., the ability to encode substructures). In this paper, we introduce a unified framework for quantitatively studying the expressiveness of GNN architectures, addressing all the above limitations. Specifically, we identify a fundamental expressivity measure termed homomorphism expressivity, which quantifies the ability of GNN models to count graphs under homomorphism. Homomorphism expressivity offers a complete and practical assessment tool: the completeness enables direct expressivity comparisons between GNN models, while the practicality allows for understanding concrete GNN abilities such as subgraph counting. By examining four classes of prominent GNNs as case studies, we derive simple, unified, and elegant descriptions of their homomorphism expressivity for both invariant and equivariant settings. Our results provide novel insights into a series of previous works, unify the landscape of different subareas in the community, and settle several open questions. Empirically, extensive experiments on both synthetic and real-world tasks verify our theory, showing that the practical performance of GNN models aligns well with the proposed metric.
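To make the central notion concrete: a homomorphism from a pattern graph F into a graph G maps every vertex of F to a vertex of G so that every edge of F lands on an edge of G, and hom(F, G) denotes the number of such maps. The following minimal sketch (plain Python; the function name and toy graphs are illustrative assumptions, not code or definitions from the paper) only shows how hom(F, G) is computed by brute force, not how any GNN model estimates it.

```python
from itertools import product

def count_homomorphisms(pattern_edges, pattern_nodes, graph_edges, graph_nodes):
    """Count homomorphisms from an undirected pattern F into a graph G.

    A homomorphism assigns each node of F a node of G such that every edge
    of F is mapped onto an edge of G (images of distinct nodes may coincide).
    """
    # Symmetric adjacency set for the target graph G.
    adjacency = set()
    for u, v in graph_edges:
        adjacency.add((u, v))
        adjacency.add((v, u))

    count = 0
    # Brute force: check all |V(G)|^|V(F)| vertex assignments.
    for image in product(graph_nodes, repeat=len(pattern_nodes)):
        mapping = dict(zip(pattern_nodes, image))
        if all((mapping[u], mapping[v]) in adjacency for u, v in pattern_edges):
            count += 1
    return count

# Toy check: homomorphisms from a triangle into the complete graph K4.
triangle_nodes = [0, 1, 2]
triangle_edges = [(0, 1), (1, 2), (2, 0)]
k4_nodes = list(range(4))
k4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(count_homomorphisms(triangle_edges, triangle_nodes, k4_edges, k4_nodes))  # 24
```

In the paper's framework, a model's homomorphism expressivity is characterized by the family of patterns F whose homomorphism counts it can compute; the sketch above only illustrates the underlying quantity hom(F, G).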
DOI: 10.48550/arxiv.2401.08514