A comprehensive, application-oriented study of catastrophic forgetting in DNNs
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: ICLR 2019, International Conference on Learning Representations. We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (incremental) learning. A new experimental protocol is proposed that enforces typical constraints encountered in application scenarios. As the investigation is empirical, we evaluate CF behavior on the largest number of visual classification datasets to date, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment to previous works on CF. Our results clearly indicate that no model avoids CF for all investigated datasets and SLTs under application conditions. We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models. (A minimal illustration of one such SLT follows this record.)
DOI: 10.48550/arxiv.1905.08101
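
As a minimal sketch of the phenomenon the abstract studies: the code below builds one two-task Sequential Learning Task by splitting MNIST classes into two disjoint sub-tasks, trains a small network on each in sequence, and measures catastrophic forgetting as the drop in first-task accuracy after training on the second task. The dataset, class split, network, and hyperparameters here are illustrative assumptions, not the paper's experimental protocol, which enforces additional application constraints.

```python
# Illustrative sketch (assumed setup, not the paper's protocol):
# one two-task SLT from MNIST, forgetting measured as task-1 accuracy drop.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

def task_loader(dataset, classes, batch_size=128, shuffle=True):
    """Restrict a dataset to a subset of classes (one sub-task of the SLT)."""
    idx = [i for i, y in enumerate(dataset.targets) if int(y) in classes]
    return DataLoader(Subset(dataset, idx), batch_size=batch_size, shuffle=shuffle)

def accuracy(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.view(x.size(0), -1)).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.size(0)
    return correct / total

def train(model, loader, epochs=1, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x.view(x.size(0), -1)), y).backward()
            opt.step()

tfm = transforms.ToTensor()
train_set = datasets.MNIST("data", train=True, download=True, transform=tfm)
test_set = datasets.MNIST("data", train=False, download=True, transform=tfm)

# One SLT: sub-task 1 = digits 0-4, sub-task 2 = digits 5-9 (assumed split).
d1, d2 = range(5), range(5, 10)
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

train(model, task_loader(train_set, d1))
acc_before = accuracy(model, task_loader(test_set, d1, shuffle=False))
train(model, task_loader(train_set, d2))
acc_after = accuracy(model, task_loader(test_set, d1, shuffle=False))
print(f"task-1 accuracy: {acc_before:.3f} -> {acc_after:.3f} (forgetting)")
```

With plain SGD-style fine-tuning and no mitigation (e.g., EWC or IMM, which the paper discusses), task-1 accuracy typically collapses after task-2 training, which is the CF behavior the study evaluates across datasets and SLTs.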