Defying Imbalanced Forgetting in Class Incremental Learning
Saved in:
Main Authors: | , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | For the first time, we observe a high level of imbalance in the accuracy of different classes within the same old task. This intriguing phenomenon, discovered in replay-based Class Incremental Learning (CIL), highlights the imbalanced forgetting of learned classes: their accuracy is similar before catastrophic forgetting occurs. The phenomenon previously went unidentified because CIL is typically measured by average incremental accuracy, which assumes that the accuracy of classes within the same task is similar; this assumption does not hold in the face of catastrophic forgetting. Further empirical studies indicate that this imbalanced forgetting is caused by conflicts in representation between semantically similar old and new classes, conflicts that are rooted in the data imbalance inherent in replay-based CIL methods. Building on these insights, we propose CLass-Aware Disentanglement (CLAD), which predicts the old classes that are more likely to be forgotten and enhances their accuracy. Importantly, CLAD can be seamlessly integrated into existing CIL methods. Extensive experiments demonstrate that CLAD consistently improves current replay-based methods, yielding performance gains of up to 2.56%. |
DOI: | 10.48550/arxiv.2403.14910 |
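
The measurement gap described in the abstract can be made concrete: average incremental accuracy aggregates over all classes of a task, so sharply diverging per-class accuracies within the same old task stay hidden. Below is a minimal, self-contained sketch of the per-class evaluation the abstract argues for; the function name, toy data, and simulated forgetting rates are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, class_ids):
    """Accuracy of each class in `class_ids`; exposes imbalanced forgetting
    that a single task-level average would hide. (Illustrative helper,
    not from the paper.)"""
    accs = {}
    for c in class_ids:
        mask = (y_true == c)
        if mask.any():
            accs[c] = float((y_pred[mask] == c).mean())
    return accs

# Toy example: an "old" task with classes 0-4, evaluated after new tasks arrived.
rng = np.random.default_rng(0)
y_true = np.repeat(np.arange(5), 100)

# Simulate imbalanced forgetting (assumed rates): classes 3 and 4 are
# misclassified far more often, drifting toward the new classes 5-9.
y_pred = y_true.copy()
forget = rng.random(y_true.shape) < np.where(y_true >= 3, 0.6, 0.1)
y_pred[forget] = rng.integers(5, 10, forget.sum())

accs = per_class_accuracy(y_true, y_pred, class_ids=range(5))
print("per-class accuracy:", accs)          # classes 3 and 4 drop sharply
print("task average:", np.mean(list(accs.values())))  # hides the imbalance
```

In this toy setup the task-level average looks moderate while two of the five old classes have collapsed, which is exactly the kind of imbalanced forgetting the abstract says average incremental accuracy conceals.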