IOSL: Incremental Open Set Learning
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2024-04, Vol. 34 (4), p. 1-1
Main authors: , ,
Format: Article
Language: English
Online access: Order full text
Abstract: Class incremental learning (CIL) has drawn wide attention in academic research. However, most existing methods cannot be applied in practical scenarios where unknown classes occur during inference. To solve this problem, we target a more challenging and realistic setting: Incremental Open Set Learning (IOSL), which must reject unknown classes in test data while incrementally learning new classes. IOSL poses two coupled challenges: 1) overcoming catastrophic forgetting of old classes when learning new classes incrementally, given the scarcity of old training samples, and 2) minimizing both the empirical classification risk on known classes and the open space risk on unknown classes. To address these challenges, we propose an incremental open-set learning method with a "future-look" ability, which reserves embedding space for incrementally arriving new classes and potential unknown classes simultaneously, indirectly alleviating catastrophic forgetting and enabling reliable recognition of unknown classes. Specifically, a normalized prototype learning strategy is designed to minimize the empirical classification risk while implicitly reserving some embedding space. Moreover, we design an extra-classes synthesizing module to explicitly reserve more suitable space, further minimizing the empirical classification risk while reducing the open space risk. Furthermore, we develop an adaptive metric learning loss that mitigates the class imbalance between old and new classes by fully exploiting exemplars and selecting an adaptive margin for each pair of old and new classes. Extensive experiments on representative classification datasets validate the superiority of our method.
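The abstract's core mechanism, normalized prototype classification with open-set rejection, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `open_set_predict`, the fixed cosine-similarity threshold, and the toy 2-D prototypes are all illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Project vectors onto the unit hypersphere (normalized prototypes/features).
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def open_set_predict(features, prototypes, threshold=0.5):
    """Assign each feature to its most similar normalized class prototype,
    rejecting it as unknown (-1) when the best cosine similarity is below
    threshold. Threshold and prototypes here are illustrative assumptions."""
    f = l2_normalize(np.asarray(features, dtype=float))
    p = l2_normalize(np.asarray(prototypes, dtype=float))
    sims = f @ p.T                     # cosine similarities, shape (n, num_classes)
    best = sims.argmax(axis=1)         # closest known class per sample
    best_sim = sims.max(axis=1)
    return np.where(best_sim >= threshold, best, -1)

# Two toy class prototypes and three queries; the last query is far from
# both prototypes and should be rejected as unknown.
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
queries = np.array([[0.9, 0.1],      # near class 0
                    [0.1, 0.9],      # near class 1
                    [-1.0, -1.0]])   # dissimilar to both -> unknown
print(open_set_predict(queries, prototypes))  # → [ 0  1 -1]
```

The rejection rule directly trades off the two risks the abstract names: raising the threshold shrinks the open space risk (fewer unknowns accepted) at the cost of more known-class rejections, and vice versa.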
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2023.3304838