Graph Representation Learning via Aggregation Enhancement
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data but still face challenges in effectively aggregating and propagating information between layers, which limits their performance. We tackle this problem with the kernel regression (KR) approach, using KR loss as the primary loss in self-supervised settings or as a regularization term in supervised settings. We show substantial performance improvements over state-of-the-art methods in both scenarios on multiple transductive and inductive node classification datasets, especially for deep networks. As opposed to mutual information (MI), KR loss is convex and easy to estimate in high-dimensional cases, even though it indirectly maximizes the MI between its inputs. Our work highlights the potential of KR to advance the field of graph representation learning and enhance the performance of GNNs. The code to reproduce our experiments is available at https://github.com/Anonymous1252022/KR_for_GNNs
DOI: 10.48550/arxiv.2201.12843
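The summary does not spell out the KR loss itself, but the idea of a kernel regression objective between layer representations can be sketched as the kernel ridge regression residual of predicting one layer's node embeddings from another's: the smaller the residual, the more information survives aggregation. The sketch below is illustrative only; the RBF kernel choice, the ridge parameter `lam`, and the helper names `rbf_kernel`/`kr_loss` are assumptions, not the paper's exact formulation (see the linked repository for that).

```python
import torch

def rbf_kernel(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """RBF Gram matrix over node embeddings (kernel choice is an assumption)."""
    d2 = torch.cdist(x, x).pow(2)            # pairwise squared distances
    return torch.exp(-d2 / (2 * sigma ** 2))

def kr_loss(h_src: torch.Tensor, h_tgt: torch.Tensor,
            lam: float = 1e-3, sigma: float = 1.0) -> torch.Tensor:
    """Kernel ridge regression residual of predicting h_tgt from h_src.

    Minimizing this residual makes h_tgt more predictable from h_src,
    one way to encourage dependence (and hence MI) between the two
    representations. Illustrative sketch, not the paper's exact KR loss.
    """
    n = h_src.size(0)
    K = rbf_kernel(h_src, sigma)
    eye = torch.eye(n, device=K.device, dtype=K.dtype)
    alpha = torch.linalg.solve(K + lam * eye, h_tgt)  # (K + lam*I)^-1 h_tgt
    resid = h_tgt - K @ alpha                         # regression residual
    return resid.pow(2).mean()

# Hypothetical usage in a supervised setting, per the summary: add KR loss
# as a regularizer between consecutive GNN layer representations h1 and h2.
# total_loss = cross_entropy(logits, labels) + beta * kr_loss(h2, h1)
```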