Generative feature-driven image replay for continual learning

Bibliographic details
Published in: Image and Vision Computing, 2024-10, Vol. 150, Article 105187
Authors: Thandiackal, Kevin; Portenier, Tiziano; Giovannini, Andrea; Gabrani, Maria; Goksel, Orcun
Format: Article
Language: English
Abstract
Neural networks are prone to catastrophic forgetting when trained incrementally on different tasks. Popular incremental learning methods mitigate such forgetting by retaining a subset of previously seen samples and replaying them during the training on subsequent tasks. However, this is not always possible, e.g., due to data protection regulations. In such restricted scenarios, one can employ generative models to replay either artificial images or hidden features to a classifier. In this work, we propose Genifer (GENeratIve FEature-driven image Replay), where a generative model is trained to replay images that must induce the same hidden features as real samples when they are passed through the classifier. Our technique therefore incorporates the benefits of both image and feature replay, i.e.: (1) unlike conventional image replay, our generative model explicitly learns the distribution of features that are relevant for classification; (2) in contrast to feature replay, our entire classifier remains trainable; and (3) we can leverage image-space augmentations, which increase distillation performance while also mitigating overfitting during the training of the generative model. We show that Genifer substantially outperforms the previous state of the art for various settings on the CIFAR-100 and CUB-200 datasets. The code is available at: https://github.com/kevthan/feature-driven-image-replay.

Highlights:
• Generative replay enables continual learning without storing previous exemplars.
• Our feature-driven replay leverages the benefits of both image and feature replay.
• Unlike image replay, the generative model learns a simplified feature distribution.
• In contrast to conventional feature replay, the entire classifier remains trainable.
• Augmentations improve distillation and reduce overfitting in the generative model.
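
The core feature-matching idea described in the abstract can be illustrated with a minimal sketch: the generator is optimized so that its synthetic images, when passed through the classifier's feature extractor, induce hidden features close to those of real samples. The PyTorch code below is an assumption-based illustration only; the toy architectures (SmallClassifier, SmallGenerator), the plain MSE feature-matching loss, and all hyperparameters are placeholders and do not reflect the authors' implementation, which is available in the linked repository.

```python
# Minimal, hedged sketch of feature-driven replay (not the Genifer implementation):
# train a generator so its images induce the same hidden features as real samples
# when passed through the classifier's feature extractor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Toy classifier exposing a hidden-feature extractor (placeholder architecture)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

class SmallGenerator(nn.Module):
    """Toy generator mapping latent codes to 32x32 RGB images in [-1, 1]."""
    def __init__(self, z_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def feature_matching_loss(classifier, real_images, fake_images):
    """MSE between classifier features of generated and real images (illustrative loss)."""
    with torch.no_grad():
        target_feats = classifier.features(real_images)
    fake_feats = classifier.features(fake_images)
    return F.mse_loss(fake_feats, target_feats)

# One illustrative generator update on random stand-in data.
classifier = SmallClassifier()
for p in classifier.parameters():          # keep the classifier fixed while fitting the generator
    p.requires_grad_(False)
generator = SmallGenerator()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

real = torch.rand(8, 3, 32, 32) * 2 - 1    # placeholder "real" batch scaled to [-1, 1]
fake = generator(torch.randn(8, 64))

loss = feature_matching_loss(classifier, real, fake)
opt_g.zero_grad()
loss.backward()
opt_g.step()
```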
ISSN: 0262-8856, 1872-8138
DOI: 10.1016/j.imavis.2024.105187