Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners
Abstract: In the SSLAD-Track 3B challenge on continual learning, we propose the method of COntinual Learning with Transformer (COLT). We find that transformers suffer less from catastrophic forgetting than convolutional neural networks. The core principle of our method is to equip the transformer-based feature extractor with old-knowledge distillation and head-expanding strategies to combat catastrophic forgetting. In this report, we first introduce the overall framework of continual learning for object detection. We then analyse the effect of the key elements of our solution on withstanding catastrophic forgetting. Our method achieves 70.78 mAP on the SSLAD-Track 3B challenge test set.
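The abstract names two strategies, old-knowledge distillation and head expansion, without spelling out their mechanics. Below is a minimal PyTorch sketch of one plausible reading: a frozen copy of the old model serves as a teacher whose features the current model is penalized for drifting from, and a new classification head is appended per task while earlier heads are frozen. All class and function names here are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def old_knowledge_distillation_loss(student_feats: torch.Tensor,
                                    teacher_feats: torch.Tensor) -> torch.Tensor:
    """L2 distance between the current model's features and those of a
    frozen copy of the old model; penalizes drift from old knowledge.
    (Hypothetical form; the report may use a different distance.)"""
    return F.mse_loss(student_feats, teacher_feats.detach())


class ExpandableHead(nn.Module):
    """One way to realize 'head expanding': grow a new linear head for
    each task while freezing the heads learned on earlier tasks."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.heads = nn.ModuleList()

    def add_task(self, num_classes: int) -> None:
        # Freeze previously learned heads before appending a new one.
        for head in self.heads:
            for p in head.parameters():
                p.requires_grad_(False)
        self.heads.append(nn.Linear(self.feat_dim, num_classes))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Concatenate the logits of all task heads.
        return torch.cat([head(feats) for head in self.heads], dim=-1)
```

In this sketch the total training loss on a new task would combine the usual detection/classification loss with the distillation term, weighted by a hyperparameter, so that plasticity on new data is traded off against stability on old data.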
DOI: 10.48550/arxiv.2201.04924