Slightly Shift New Classes to Remember Old Classes for Video Class-Incremental Learning
Format: Article
Language: English
Abstract: Recent video class-incremental learning methods tend to pursue accuracy on newly seen classes excessively and rely on memory sets to mitigate catastrophic forgetting of old classes. However, limited storage allows only a few representative videos to be kept. We therefore propose SNRO, which slightly shifts the features of new classes to remember old classes. Specifically, SNRO consists of Examples Sparse (ES) and Early Break (EB). ES samples frames at a lower rate when building memory sets and later uses interpolation to align those sparse frames with the clip length the model expects. In this way, SNRO stores more exemplars under the same memory consumption and forces the model to focus on low-semantic features, which are harder to forget. EB terminates training at a small epoch, preventing the model from overstretching into the high-semantic space of the current task. Experiments on the UCF101, HMDB51, and UESTC-MMEA-CL datasets show that SNRO outperforms other approaches under the same memory consumption.
DOI: 10.48550/arxiv.2404.00901
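
The abstract describes the mechanics of ES and EB but not their implementation. The following is a minimal sketch of both ideas in PyTorch, assuming uniform temporal subsampling for exemplar storage, trilinear interpolation for frame alignment, and an illustrative epoch cap; all function names, tensor shapes, and default values are assumptions for illustration, not the authors' code.

```python
# Sketch of the two SNRO components as described in the abstract.
import torch
import torch.nn.functional as F


def build_sparse_exemplar(video: torch.Tensor, keep_frames: int = 4) -> torch.Tensor:
    """Examples Sparse (ES), storage side: keep only a few uniformly
    spaced frames per exemplar so more videos fit in the same memory
    budget. `video` has shape (T, C, H, W)."""
    t = video.shape[0]
    idx = torch.linspace(0, t - 1, keep_frames).long()
    return video[idx]  # (keep_frames, C, H, W)


def align_sparse_frames(sparse: torch.Tensor, target_frames: int = 16) -> torch.Tensor:
    """ES, replay side: temporally interpolate the stored sparse frames
    back to the clip length the backbone expects."""
    # (T', C, H, W) -> (1, C, T', H, W) so interpolation runs over time.
    x = sparse.permute(1, 0, 2, 3).unsqueeze(0)
    x = F.interpolate(x, size=(target_frames, *sparse.shape[2:]),
                      mode="trilinear", align_corners=False)
    return x.squeeze(0).permute(1, 0, 2, 3)  # (target_frames, C, H, W)


def train_incremental_task(model, loader, optimizer, criterion,
                           early_break_epoch: int = 5):
    """Early Break (EB): stop training at a small epoch so the model does
    not overstretch into the high-semantic space of the current task."""
    for epoch in range(early_break_epoch):
        for clips, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(clips), labels)
            loss.backward()
            optimizer.step()
```

Under these assumptions, storing 4 frames instead of a 16-frame clip cuts each exemplar's cost to a quarter, so roughly four times as many exemplar videos fit in the same budget, which is the trade-off the abstract describes.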