Early Preparation Pays Off: New Classifier Pre-tuning for Class Incremental Semantic Segmentation
Format: Article
Language: English
Abstract: Class incremental semantic segmentation aims to preserve old knowledge while learning new tasks; however, it is impeded by catastrophic forgetting and background shift issues. Prior works indicate the pivotal importance of initializing new classifiers and mainly focus on transferring knowledge from the background classifier or preparing classifiers for future classes, neglecting the flexibility and variance of new classifiers. In this paper, we propose a new classifier pre-tuning (NeST) method applied before the formal training process, learning a transformation from old classifiers to generate new classifiers for initialization rather than directly tuning the parameters of new classifiers. Our method makes new classifiers align with the backbone and adapt to the new data, preventing drastic changes in the feature extractor when learning new classes. In addition, we design a strategy that considers the cross-task class similarity to initialize the matrices used in the transformation, helping achieve the stability-plasticity trade-off. Experiments on the Pascal VOC 2012 and ADE20K datasets show that the proposed strategy can significantly improve the performance of previous methods. The code is available at https://github.com/zhengyuan-xie/ECCV24_NeST.
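The abstract describes generating new-classifier weights through a learned transformation of the old classifiers, with the transformation matrices initialized from cross-task class similarity. The snippet below is a minimal PyTorch sketch of one plausible reading of that idea; the linear mixing parameterization, the use of new-class feature prototypes, and all names (e.g. NewClassifierPreTuner) are illustrative assumptions, not the authors' implementation, which is available in the linked repository.

```python
# Sketch: new-class weights as a learned combination of frozen old-class weights,
# with the mixing matrix initialized from similarity between new-class prototypes
# and old classifier weights. All details are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NewClassifierPreTuner(nn.Module):
    """Generates new-class weights from old-class weights via a trainable matrix."""

    def __init__(self, old_weights: torch.Tensor, num_new: int,
                 new_prototypes: torch.Tensor = None):
        super().__init__()
        # old_weights: (num_old, feat_dim), frozen classifier weights from the old model.
        self.register_buffer("old_weights", old_weights)
        num_old = old_weights.size(0)
        A = torch.zeros(num_new, num_old)
        if new_prototypes is not None:
            # Hypothetical similarity-based init: cosine similarity between
            # new-class feature prototypes and old classifier weights.
            A = F.softmax(
                F.normalize(new_prototypes, dim=1)
                @ F.normalize(old_weights, dim=1).t(), dim=1)
        self.A = nn.Parameter(A)  # only the transformation is tuned in this stage

    def forward(self) -> torch.Tensor:
        # New classifier weights generated from the old ones.
        return self.A @ self.old_weights


# Usage sketch: pre-tune the transformation on new-task data before formal training,
# then copy the generated weights into the segmentation head as initialization.
feat_dim, num_old, num_new = 256, 16, 5
old_w = torch.randn(num_old, feat_dim)        # from the previous-step model (dummy here)
prototypes = torch.randn(num_new, feat_dim)   # e.g. mean features of each new class (dummy)
pretuner = NewClassifierPreTuner(old_w, num_new, prototypes)
optimizer = torch.optim.SGD(pretuner.parameters(), lr=0.01)

features = torch.randn(8, feat_dim)           # pooled pixel features (dummy)
labels = torch.randint(num_old, num_old + num_new, (8,))
optimizer.zero_grad()
logits_old = features @ old_w.t()             # frozen old-class logits
logits_new = features @ pretuner().t()        # logits from the generated new classifiers
loss = F.cross_entropy(torch.cat([logits_old, logits_new], dim=1), labels)
loss.backward()
optimizer.step()
new_init = pretuner().detach()                # weights used to initialize the new classifiers
```

Because only the small mixing matrix is optimized while the backbone and old classifiers stay frozen, this kind of pre-tuning would give the new classifiers a data- and backbone-aligned starting point without perturbing the feature extractor, which matches the stability-plasticity motivation stated in the abstract.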
DOI: 10.48550/arxiv.2407.14142