Where Did the Gap Go? Reassessing the Long-Range Graph Benchmark
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The recent Long-Range Graph Benchmark (LRGB, Dwivedi et al. 2022) introduced a set of graph learning tasks strongly dependent on long-range interactions between vertices. Empirical evidence suggests that on these tasks Graph Transformers significantly outperform Message Passing GNNs (MPGNNs). In this paper, we carefully reevaluate multiple MPGNN baselines as well as the Graph Transformer GPS (Rampášek et al. 2022) on LRGB. Through a rigorous empirical analysis, we demonstrate that the reported performance gap is overestimated due to suboptimal hyperparameter choices. Notably, across multiple datasets the performance gap vanishes entirely after basic hyperparameter optimization. In addition, we discuss the impact of missing feature normalization in LRGB's vision datasets and highlight a spurious implementation of LRGB's link prediction metric. The principal aim of our paper is to establish a higher standard of empirical rigor within the graph machine learning community.
DOI: 10.48550/arxiv.2309.00367
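
The abstract points to missing feature normalization in LRGB's vision datasets as one source of the overstated gap. As a minimal sketch of what such a preprocessing fix can look like, the snippet below standardizes node features using statistics computed on the training split only; the function name and tensor layout are assumptions for illustration and do not reproduce the paper's exact procedure.

```python
import torch

def standardize_features(train_x: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Z-score node features using statistics from the training split only.

    train_x: stacked node features of all training graphs, shape (N_train, D)
    x:       node features of any split to transform, shape (N, D)
    """
    # Fit per-dimension statistics on training data only, so no
    # information leaks from the validation or test splits.
    mean = train_x.mean(dim=0, keepdim=True)
    std = train_x.std(dim=0, keepdim=True).clamp_min(1e-8)  # guard against zero variance
    # Apply the same affine transform to every split.
    return (x - mean) / std
```

In a typical pipeline, one would concatenate the node features of all training graphs to fit the mean and standard deviation once, then apply the same transform to the training, validation, and test splits before model training.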