Enhancing GNNs with Architecture-Agnostic Graph Transformations: A Systematic Analysis
Abstract: In recent years, a wide variety of graph neural network (GNN) architectures have emerged, each with its own strengths, weaknesses, and complexities. Various techniques, including rewiring, lifting, and node annotation with centrality values, have been employed as pre-processing steps to enhance GNN performance. However, there are no universally accepted best practices, and the impact of architecture and pre-processing on performance often remains opaque. This study systematically explores the impact of various graph transformations as pre-processing steps on the performance of common GNN architectures across standard datasets. The models are evaluated based on their ability to distinguish non-isomorphic graphs, referred to as expressivity. Our findings reveal that certain transformations, particularly those augmenting node features with centrality measures, consistently improve expressivity. However, these gains come with trade-offs: methods like graph encoding, while enhancing expressivity, introduce numerical inaccuracies in widely-used Python packages. Additionally, we observe that these pre-processing techniques are limited when addressing complex tasks involving 3-WL- and 4-WL-indistinguishable graphs.
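To make the centrality-based pre-processing and the expressivity notion concrete, here is a minimal sketch, assuming NetworkX as the graph library; the function names (`annotate_with_centrality`, `wl_distinguishable`) are illustrative and not taken from the paper.

```python
# Minimal sketch (not the paper's code): annotate nodes with centrality
# values as a pre-processing step, then test 1-WL distinguishability.
import networkx as nx

def annotate_with_centrality(G: nx.Graph) -> nx.Graph:
    """Attach degree and betweenness centrality as a node attribute."""
    deg = nx.degree_centrality(G)
    btw = nx.betweenness_centrality(G)
    for v in G.nodes:
        # Rounding guards against spurious distinctions caused by
        # floating-point noise (the numerical inaccuracies the abstract
        # attributes to widely-used Python packages).
        G.nodes[v]["centrality"] = (round(deg[v], 6), round(btw[v], 6))
    return G

def wl_distinguishable(G1: nx.Graph, G2: nx.Graph, attr=None) -> bool:
    """True if 1-WL colour refinement separates the two graphs."""
    h1 = nx.weisfeiler_lehman_graph_hash(G1, node_attr=attr, iterations=3)
    h2 = nx.weisfeiler_lehman_graph_hash(G2, node_attr=attr, iterations=3)
    return h1 != h2

# A 6-cycle and two disjoint triangles are both 2-regular, so plain
# 1-WL refinement cannot tell them apart ...
hexagon = annotate_with_centrality(nx.cycle_graph(6))
triangles = annotate_with_centrality(
    nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3)))
print(wl_distinguishable(hexagon, triangles))                     # False
# ... but betweenness differs (zero in a triangle, positive in the
# 6-cycle), so the centrality annotation makes the pair distinguishable.
print(wl_distinguishable(hexagon, triangles, attr="centrality"))  # True
```

This mirrors the abstract's finding in miniature: the centrality annotation lifts a pair that 1-WL alone cannot separate, while analogous 3-WL- and 4-WL-hard pairs would remain out of reach for such pre-processing.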
DOI: 10.48550/arxiv.2410.08759