How hard is to distinguish graphs with graph neural networks?
Saved in:
Author:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Abstract: A hallmark of graph neural networks is their ability to distinguish the isomorphism class of their inputs. This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN). MPNN encompasses the majority of graph neural networks used today and is universal when nodes are given unique features. The analysis relies on the introduced measure of communication capacity. Capacity measures how much information the nodes of a network can exchange during the forward pass and depends on the depth, message size, global state, and width of the architecture. It is shown that the capacity of an MPNN needs to grow linearly with the number of nodes for a network to distinguish trees, and quadratically for general connected graphs. The derived bounds concern both worst- and average-case behavior and apply to networks with and without unique features and with adaptive architecture; they are also up to two orders of magnitude tighter than those given by simpler arguments. An empirical study involving 12 graph classification tasks and 420 networks reveals strong alignment between actual performance and theoretical predictions.
DOI: 10.48550/arxiv.2005.06649
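The abstract's scaling claims can be restated in notation as a rough sketch; the symbols are our own shorthand rather than taken from the record, with $c$ denoting communication capacity and $n$ the number of nodes:

$$c = \Omega(n) \ \text{to distinguish trees}, \qquad c = \Omega(n^{2}) \ \text{to distinguish general connected graphs.}$$

Per the abstract, $c$ itself depends on the depth, message size, global state, and width of the architecture; the exact functional form is given in the paper and is not reproduced here.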