Gradient Correlation Subspace Learning against Catastrophic Forgetting
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Efficient continual learning techniques have been a topic of significant research over the last few years. A fundamental problem in such learning is severe degradation of performance on previously learned tasks, also known as catastrophic forgetting. This paper introduces a novel method to reduce catastrophic forgetting in the context of incremental class learning, called Gradient Correlation Subspace Learning (GCSL). The method detects a subspace of the weights that is least affected by previous tasks and projects the weights to be trained for the new task into that subspace. The method can be applied to one or more layers of a given network architecture, and the size of the subspace used can be altered from layer to layer and from task to task. Code will be available at https://github.com/vgthengane/GCSL
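The abstract only sketches the mechanism, so the following is a minimal, hypothetical illustration of the general idea it describes: gradients collected on previous tasks are stacked into a correlation matrix, the eigenvectors with the smallest eigenvalues are taken as the subspace least affected by those tasks, and new-task gradient updates for a chosen layer are projected into that subspace. All function names, the eigendecomposition-based subspace choice, and the per-layer usage shown here are assumptions for illustration only, not the authors' reference implementation.

```python
import torch

def collect_gradients(model, loader, loss_fn, layer, n_batches=50):
    """Stack per-batch gradients of one layer's weight, measured on data
    from previously learned tasks (assumed loader yields (x, y) batches)."""
    grads = []
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        model.zero_grad()
        loss_fn(model(x), y).backward()
        grads.append(layer.weight.grad.detach().flatten().clone())
    return torch.stack(grads)  # shape: (n_batches, n_params)

def least_affected_subspace(grads, k):
    """Return an (n_params, k) basis of the directions least correlated with
    previous-task gradients: eigenvectors of the gradient correlation matrix
    belonging to the k smallest eigenvalues. The dense correlation matrix is
    only feasible for small layers; at scale a truncated SVD of the stacked
    gradients would serve the same purpose."""
    corr = grads.T @ grads / grads.shape[0]   # (n_params, n_params)
    _, eigvecs = torch.linalg.eigh(corr)      # eigenvalues in ascending order
    return eigvecs[:, :k]

def project_gradient(grad_flat, basis):
    """Project a flattened new-task gradient into the chosen subspace."""
    return basis @ (basis.T @ grad_flat)

# Hypothetical usage during new-task training: after backward(), replace the
# raw gradient of the chosen layer with its projection before optimizer.step().
#   basis = least_affected_subspace(
#       collect_gradients(model, old_task_loader, loss_fn, layer), k=32)
#   ...
#   g = layer.weight.grad.flatten()
#   layer.weight.grad.copy_(project_gradient(g, basis).view_as(layer.weight))
```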
DOI: 10.48550/arxiv.2403.02334