CLIP model is an Efficient Continual Learner
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Abstract: | The continual learning setting aims to learn new tasks over time without forgetting the previous ones. The literature reports several significant efforts to tackle this problem with limited or no access to previous task data. Among such efforts, typical solutions offer sophisticated techniques involving memory replay, knowledge distillation, model regularization, and dynamic network expansion. The resulting methods incur a retraining cost at each learning task, dedicated memory requirements, and setting-specific design choices. In this work, we show that a frozen CLIP (Contrastive Language-Image Pretraining) model offers astounding continual learning performance without any fine-tuning (zero-shot evaluation). We evaluate CLIP under a variety of settings, including class-incremental, domain-incremental, and task-agnostic incremental learning, on five popular benchmarks (ImageNet-100 & 1K, CORe50, CIFAR-100, and TinyImageNet). Without any bells and whistles, the CLIP model outperforms the state-of-the-art continual learning approaches in the majority of the settings. We also show how the CLIP model's performance varies with simple prompt templates for the text inputs. To the best of our knowledge, this is the first work to report CLIP's zero-shot performance in a continual setting. We advocate the use of this strong yet embarrassingly simple baseline for future comparisons in continual learning tasks. |
---|---|
DOI: | 10.48550/arxiv.2210.03114 |
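
As a concrete illustration of the zero-shot evaluation described in the abstract, below is a minimal sketch of prompt-template-based CLIP classification using the open-source `clip` package (github.com/openai/CLIP). The class names, image path, and the "a photo of a {}" template are illustrative assumptions, not the paper's exact protocol.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a frozen, pretrained CLIP model; no fine-tuning is performed.
model, preprocess = clip.load("ViT-B/32", device=device)
model.eval()

# Hypothetical class names for the classes seen so far; in a
# class-incremental setting this list simply grows with each new task.
class_names = ["dog", "cat", "airplane"]

# Simple prompt template applied to every class name.
prompts = [f"a photo of a {name}" for name in class_names]
text_tokens = clip.tokenize(prompts).to(device)

# Encode one query image (the path is illustrative).
image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text_tokens)

    # Cosine similarity between the image and each class prompt.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)

# Zero-shot prediction: the class whose prompt matches best.
pred = similarity.argmax(dim=-1).item()
print(f"predicted class: {class_names[pred]}")
```

Because the model is never updated, there is no retraining cost and nothing to forget; under this sketch, extending to a new task only means appending its class names to `class_names`.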