Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning
Saved in:
Main authors: | , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Graph Neural Networks (GNNs) are proficient in graph representation learning
and achieve promising performance on versatile tasks such as node
classification and link prediction. Usually, comprehensive hyperparameter
tuning is essential to fully unlock a GNN's top performance, especially for
complicated tasks such as node classification on large graphs and long-range
graphs. Such tuning typically incurs high computational and time costs and
requires careful design of appropriate search spaces. This work introduces a
graph-conditioned latent diffusion framework (GNN-Diff) that generates
high-performing GNNs from the model checkpoints of sub-optimal
hyperparameters selected by a light-tuning coarse search. We validate our
method through 166 experiments across four graph tasks: node classification on
small, large, and long-range graphs, as well as link prediction. Our
experiments involve 10 classic and state-of-the-art target models and 20
publicly available datasets. The results consistently demonstrate that
GNN-Diff: (1) boosts the performance of GNNs with efficient hyperparameter
tuning; and (2) exhibits high stability and generalizability on unseen data
across multiple generation runs. The code is available at
https://github.com/lequanlin/GNN-Diff. |
DOI: | 10.48550/arxiv.2410.05697 |
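To make the pipeline described in the summary concrete, the sketch below is a minimal, hypothetical illustration, not the authors' implementation (which is in the linked repository). It assumes a toy two-layer GCN on synthetic data as the target model, a deliberately small hyperparameter grid as the "coarse search", and a Gaussian perturbation of the mean checkpoint as a stand-in for the graph-conditioned latent diffusion sampler; only the overall shape of the workflow (coarse search, collect flattened checkpoints, generate new parameters) follows the abstract.

```python
import itertools
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy graph: random features, symmetrically normalized adjacency, binary labels.
n, d, c = 100, 16, 2
X = torch.randn(n, d)
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.T + torch.eye(n)) > 0).float()            # undirected + self-loops
deg = A.sum(1)
A_hat = A / torch.sqrt(deg[:, None] * deg[None, :])   # D^{-1/2} A D^{-1/2}
y = torch.randint(0, c, (n,))

class GCN(nn.Module):
    """Two-layer GCN: A_hat ReLU(A_hat X W1) W2."""
    def __init__(self, hidden=16):
        super().__init__()
        self.w1 = nn.Linear(d, hidden)
        self.w2 = nn.Linear(hidden, c)

    def forward(self, x, a):
        return self.w2(a @ torch.relu(self.w1(a @ x)))

def train(lr, weight_decay, epochs=50):
    model = GCN()
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X, A_hat), y).backward()
        opt.step()
    acc = (model(X, A_hat).argmax(1) == y).float().mean().item()
    return model, acc

# Step 1 -- coarse search: a small hyperparameter grid, keeping every checkpoint
# as a flattened parameter vector (all checkpoints share one architecture).
checkpoints, accs = [], []
for lr, wd in itertools.product([1e-2, 1e-3], [0.0, 5e-4]):
    model, acc = train(lr, wd)
    checkpoints.append(torch.cat([p.detach().flatten() for p in model.parameters()]))
    accs.append(round(acc, 3))
param_data = torch.stack(checkpoints)                  # (num_checkpoints, num_params)

# Step 2 -- generative step. GNN-Diff would train a graph-conditioned latent
# diffusion model on `param_data` and sample new, higher-performing parameters;
# here a Gaussian perturbation of the mean checkpoint stands in for that sampler.
sampled = param_data.mean(0) + 0.01 * torch.randn(param_data.shape[1])
print("coarse-search accuracies:", accs, "| sampled vector length:", sampled.numel())
```

The fixed architecture across the grid is what makes the checkpoints stackable into a single parameter dataset; in the paper's setting, the generative model is conditioned on the input graph rather than sampling parameters unconditionally as this toy stand-in does.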