Knowledge Accumulation in Continually Learned Representations and the Issue of Feature Forgetting
Transactions on Machine Learning Research (TMLR), 2024
Main authors: , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: Continual learning research has shown that neural networks suffer from catastrophic forgetting "at the output level", but it is debated whether this is also the case at the level of learned representations. Multiple recent studies ascribe representations a certain level of innate robustness against forgetting, holding that they forget only minimally in comparison with forgetting at the output level. We revisit and expand upon the experiments that revealed this difference in forgetting, and illustrate the coexistence of two phenomena that affect the quality of continually learned representations: knowledge accumulation and feature forgetting. Taking both aspects into account, we show that, even though forgetting in the representation (i.e., feature forgetting) can be small in absolute terms, when measured relative to how much was learned during a task, forgetting in the representation tends to be just as catastrophic as forgetting at the output level. Next, we show that this feature forgetting is problematic, as it substantially slows down the incremental learning of good general representations (i.e., knowledge accumulation). Finally, we study how feature forgetting and knowledge accumulation are affected by different types of continual learning methods.
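The abstract's distinction between absolute and relative forgetting can be made concrete with a small sketch. The paper's exact metric is not given here; the sketch below assumes representation quality is scored by linear-probe accuracy on a task's data, and all names are hypothetical.

```python
# A minimal sketch of "relative" feature forgetting, assuming representation
# quality is measured by linear-probe accuracy on a task's data (an assumption;
# the abstract does not specify the exact metric).

def relative_forgetting(acc_before: float, acc_after_task: float,
                        acc_final: float) -> float:
    """Forgetting expressed as a fraction of what was learned during the task.

    acc_before:     probe accuracy before training on the task
    acc_after_task: probe accuracy right after training on the task
    acc_final:      probe accuracy after training on all subsequent tasks
    """
    learned = acc_after_task - acc_before     # knowledge gained during the task
    forgotten = acc_after_task - acc_final    # knowledge lost afterwards
    if learned <= 0:
        return 0.0  # nothing was gained, so relative forgetting is ill-defined
    return forgotten / learned

# Example: a probe gains 10 points during a task (60% -> 70%) and later drops
# back to 61%. Absolute forgetting is only 9 points, yet relative forgetting
# is ~0.9: almost everything learned during the task was lost.
print(relative_forgetting(0.60, 0.70, 0.61))  # ~0.9
```

This illustrates the abstract's claim: a small absolute drop in representation quality can still mean that nearly all of the task-specific learning was forgotten.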
DOI: 10.48550/arxiv.2304.00933