E-CGL: An Efficient Continual Graph Learner
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Continual learning has emerged as a crucial paradigm for learning from sequential data while preserving previous knowledge. In continual graph learning, where graphs continuously evolve from streaming graph data, unique challenges arise that demand adaptive and efficient graph learning methods in addition to handling catastrophic forgetting. The first challenge stems from the interdependencies between different graph data, where previous graphs can influence new data distributions. The second challenge is efficiency when dealing with large graphs. To address these two problems, we propose an Efficient Continual Graph Learner (E-CGL) in this paper. We tackle the interdependency issue by demonstrating the effectiveness of replay strategies and introducing a combined sampling strategy that considers both node importance and diversity (sketched below). To overcome the efficiency limitation, E-CGL leverages a simple yet effective MLP model that shares weights with a GCN during training, achieving acceleration by circumventing the computationally expensive message-passing process (see the second sketch below). Our method comprehensively surpasses nine baselines on four graph continual learning datasets under two settings, while reducing catastrophic forgetting to an average of -1.1%. Additionally, E-CGL achieves an average 15.83x training-time speedup and 4.89x inference-time speedup across the four datasets. These results indicate that E-CGL not only effectively manages the correlations between different graph data during continual training but also improves the efficiency of continual learning on large graphs. The code is publicly available at https://github.com/aubreygjh/E-CGL.
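
The abstract does not specify how importance and diversity are combined in the replay sampler. As a rough illustration only, the following PyTorch sketch assumes degree centrality as the importance score and greedy farthest-point selection in embedding space as the diversity criterion; the function name, parameters, and both criteria are hypothetical, not E-CGL's actual design.

    import torch

    def sample_replay_nodes(adj, embeddings, budget, importance_ratio=0.5):
        """Pick `budget` replay nodes: part by importance, part by diversity.

        adj:        (N, N) dense adjacency matrix (assumed small enough to densify)
        embeddings: (N, d) node embeddings from the current model
        """
        n_imp = int(budget * importance_ratio)
        n_div = budget - n_imp

        # Importance half: keep the highest-degree nodes (a common centrality proxy).
        degree = adj.sum(dim=1)
        imp_idx = torch.topk(degree, n_imp).indices

        # Diversity half: greedy farthest-point sampling in embedding space,
        # seeded with the already-chosen important nodes.
        chosen = imp_idx.tolist()
        dist = torch.cdist(embeddings, embeddings[imp_idx]).min(dim=1).values
        for _ in range(n_div):
            nxt = int(torch.argmax(dist))  # node farthest from every chosen node
            chosen.append(nxt)
            # Update each node's distance to its nearest chosen node.
            dist = torch.minimum(
                dist, torch.cdist(embeddings, embeddings[nxt:nxt + 1]).squeeze(1))
        return torch.tensor(chosen)

Nodes already selected have distance zero to the chosen set, so the greedy step naturally avoids re-picking them while spreading the replay buffer across the embedding space.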
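The weight-sharing idea can likewise be illustrated with a small sketch: one set of linear weights serves both an MLP path (no neighborhood aggregation, cheap to train) and a GCN path (sparse message passing over a normalized adjacency). This is an assumption-laden illustration of the general technique, not the authors' implementation; `SharedLayer`, `SharedGNN`, and the layer sizes are hypothetical.

    import torch
    import torch.nn as nn

    class SharedLayer(nn.Module):
        """One layer whose linear weights serve both an MLP and a GCN path."""

        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)  # the shared parameters

        def forward(self, x, adj_norm=None):
            h = self.linear(x)            # MLP path: no neighborhood aggregation
            if adj_norm is not None:      # GCN path: sparse message passing
                h = torch.sparse.mm(adj_norm, h)
            return torch.relu(h)

    class SharedGNN(nn.Module):
        def __init__(self, dims=(128, 64, 32)):
            super().__init__()
            self.layers = nn.ModuleList(
                SharedLayer(i, o) for i, o in zip(dims[:-1], dims[1:]))

        def forward(self, x, adj_norm=None):
            # Call model(x) to train as a fast MLP on raw features;
            # call model(x, adj_norm) to run the identical weights as a GCN.
            for layer in self.layers:
                x = layer(x, adj_norm)
            return x

Under this reading, training calls `model(x)` and skips message passing entirely, while the same parameters can be evaluated as a GCN with `model(x, adj_norm)` whenever graph structure should be taken into account.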
DOI: 10.48550/arxiv.2408.09350