Graph Continual Learning with Debiased Lossless Memory Replay
Saved in:
Main authors: , ,
Format: Article
Language: eng
Keywords:
Online access: Order full text
Abstract: Real-life graph data often expands continually, rendering the learning of graph neural networks (GNNs) on static graph data impractical. Graph continual learning (GCL) tackles this problem by continually adapting GNNs to the expanded graph of the current task while maintaining performance on the graphs of previous tasks. Memory replay-based methods, which replay data from previous tasks when learning new ones, have been explored as one principled approach to mitigating the forgetting of previously learned knowledge. In this paper, we extend this methodology with a novel framework called Debiased Lossless Memory replay (DeLoMe). Unlike existing methods that sample nodes/edges of previous graphs to construct the memory, DeLoMe learns small, lossless synthetic node representations as the memory. The learned memory not only preserves graph data privacy but also captures holistic graph information, which sampling-based methods cannot. Further, prior methods suffer from a bias toward the current task due to the class imbalance between the memory data and the current data. DeLoMe devises a debiased GCL loss function to effectively alleviate this bias. Extensive experiments on four graph datasets show the effectiveness of DeLoMe under both class- and task-incremental learning settings.
DOI: 10.48550/arxiv.2404.10984