Graph Neural Networks Need Cluster-Normalize-Activate Modules
Saved in:
Main authors:
Format: Article
Language: eng
Keywords:
Online access: Order full text
Abstract: Graph Neural Networks (GNNs) are non-Euclidean deep learning models
for graph-structured data. Despite their successful and diverse applications,
oversmoothing prohibits deep architectures, since node features converge to a
single fixed point. This severely limits their potential to solve complex
tasks. To counteract this tendency, we propose a plug-and-play module
consisting of three steps: Cluster-Normalize-Activate (CNA). By applying CNA
modules, GNNs search for and form super nodes in each layer, which are
normalized and activated individually. We demonstrate on node classification
and property prediction tasks that CNA significantly improves accuracy over
the state of the art. In particular, CNA reaches 94.18% and 95.75% accuracy
on Cora and CiteSeer, respectively. CNA also benefits GNNs in regression
tasks, reducing the mean squared error compared to all baselines. At the same
time, GNNs with CNA require substantially fewer learnable parameters than
competing architectures.
DOI: 10.48550/arxiv.2412.04064
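The abstract describes the three CNA steps only at a high level. Below is a minimal PyTorch sketch of such a module; the hard nearest-centroid assignment with learnable centroids, the per-cluster LayerNorm, and the per-cluster PReLU activations are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class CNA(nn.Module):
    """Minimal sketch of a Cluster-Normalize-Activate module.

    Assumptions (not taken from the paper): hard nearest-centroid
    cluster assignment, per-cluster LayerNorm, per-cluster PReLU.
    """

    def __init__(self, dim: int, num_clusters: int = 4):
        super().__init__()
        # Learnable centroids; each node joins the cluster ("super node")
        # of its nearest centroid.
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))
        # One normalization and one activation per cluster, so each
        # super node is normalized and activated individually.
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_clusters))
        self.acts = nn.ModuleList(nn.PReLU() for _ in range(num_clusters))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, dim] node features from the preceding GNN layer.
        # Cluster step: assign each node to its nearest centroid.
        assign = torch.cdist(x, self.centroids).argmin(dim=1)
        out = torch.zeros_like(x)
        for k, (norm, act) in enumerate(zip(self.norms, self.acts)):
            mask = assign == k
            if mask.any():
                # Normalize, then activate, this cluster's nodes only.
                out[mask] = act(norm(x[mask]))
        return out


# Usage: apply CNA to the output of a message-passing layer.
x = torch.randn(100, 64)   # 100 nodes, 64-dim features
y = CNA(dim=64)(x)
print(y.shape)             # torch.Size([100, 64])
```

Note that the hard argmin assignment blocks gradients to the centroids, so they would not be learned end-to-end; a soft, differentiable assignment would be needed for that, and this sketch does not attempt to reproduce the paper's actual clustering mechanism.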