Out-Of-Distribution Detection In Unsupervised Continual Learning

Bibliographic Details
Main Authors: He, Jiangpeng, Zhu, Fengqing
Format: Article
Language: English
Description
Summary: Unsupervised continual learning aims to learn new tasks incrementally without requiring human annotations. However, most existing methods, especially those targeting image classification, only work in a simplified scenario that assumes all new data belong to new tasks, which is unrealistic when class labels are not provided. Therefore, to perform unsupervised continual learning in real-life applications, an out-of-distribution detector is required at the outset to identify whether each new sample corresponds to a new task or to already learned tasks, a problem that remains under-explored. In this work, we formulate the problem of Out-of-distribution Detection in Unsupervised Continual Learning (OOD-UCL) together with a corresponding evaluation protocol. In addition, we propose a novel OOD detection method that first corrects the output bias and then enhances the output confidence for in-distribution data based on task discriminativeness; it can be applied directly without modifying the learning procedures and objectives of continual learning. Our method is evaluated on the CIFAR-100 dataset following the proposed evaluation protocol, and we show improved performance compared with existing OOD detection methods under the unsupervised continual learning scenario.
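
The abstract describes the detector only at a high level. As a rough illustration of the general idea (not the paper's actual algorithm), the sketch below shows a bias-corrected, confidence-based OOD score over softmax outputs: per-class bias is estimated from reference data of already learned tasks, subtracted from new logits, and the corrected maximum softmax probability is thresholded. All names (`estimate_class_bias`, `ood_scores`), the mean-logit bias estimate, and the fixed threshold are assumptions for illustration; the paper's specific bias correction and task-discriminativeness term are not reproduced here.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def estimate_class_bias(reference_logits):
    # Estimate a per-class output bias from logits computed on reference
    # data (e.g. stored exemplars) of already learned tasks.
    return reference_logits.mean(axis=0)

def ood_scores(logits, class_bias):
    # Bias-corrected maximum-softmax-probability score.
    # Higher score -> more likely in-distribution (already learned task).
    corrected = logits - class_bias
    probs = softmax(corrected, axis=-1)
    return probs.max(axis=-1)

def is_in_distribution(logits, class_bias, threshold=0.5):
    # Flag each sample: True = already learned task, False = new task (OOD).
    return ood_scores(logits, class_bias) >= threshold

# Toy usage: 5 learned classes, with class 0 artificially over-represented
# in the reference logits to mimic output bias.
rng = np.random.default_rng(0)
ref_logits = rng.normal(size=(200, 5)) + np.array([2.0, 0, 0, 0, 0])
new_logits = rng.normal(size=(10, 5))
bias = estimate_class_bias(ref_logits)
print(is_in_distribution(new_logits, bias, threshold=0.5))
```

In practice, the threshold would be calibrated on held-out in-distribution data rather than fixed; this sketch only conveys the two-step structure (bias correction, then confidence-based scoring) named in the abstract.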
DOI:10.48550/arxiv.2204.05462