Combining Contrastive Learning with Auto-Encoder for Out-of-Distribution Detection

Bibliographic Details
Published in: Applied Sciences 2023-12, Vol. 13 (23), p. 12930
Authors: Luo, Dawei; Zhou, Heng; Bae, Joonsoo; Yun, Bom
Format: Article
Language: English
Online Access: Full text
Description
Summary: Reliability and robustness are fundamental requirements for the successful integration of deep-learning models into real-world applications. Deployed models must be aware of their own limitations, which requires the critical ability to recognize out-of-distribution (OOD) data and defer to human intervention. Several OOD-detection frameworks have been introduced and achieved remarkable results, but most state-of-the-art (SOTA) models rely on supervised training with annotated data. Acquiring labeled data, however, can be demanding, time-consuming, or, in some cases, infeasible. Unsupervised learning has therefore gained substantial traction and made noteworthy advances: it allows models to be trained solely on unlabeled data while achieving performance comparable or even superior to supervised alternatives. Among unsupervised methods, contrastive learning has proven effective at extracting features for a variety of downstream tasks; auto-encoders, in turn, are widely used to learn representations that faithfully reconstruct the input data. In this study, we introduce a novel approach that combines contrastive learning with auto-encoders for OOD detection on unlabeled data. Contrastive learning tightens the clustering of in-distribution data while separating it from OOD data, and the auto-encoder further refines the feature space. Within this framework, data are implicitly classified into in-distribution and OOD categories with a notable degree of precision. Our experiments show that this method surpasses most existing detectors that rely on unlabeled, or even labeled, data. By incorporating an auto-encoder into an unsupervised learning framework and training it on the CIFAR-100 dataset, our model improves the detection rate of unsupervised methods by an average of 5.8% and outperforms a supervised OOD detector by an average margin of 11%.
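The abstract gives only a high-level description of the method, so the following is a minimal, hypothetical sketch of the joint objective it describes: a shared encoder trained with a SimCLR-style contrastive loss on two augmented views of each unlabeled image, plus a decoder trained with a reconstruction loss. The MLP architecture, the NT-Xent loss choice, the temperature, the weight lambda_rec, and the reconstruction-error OOD score are all illustrative assumptions, not the authors' published configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: joint contrastive + reconstruction training on
# unlabeled data. Architectures and hyperparameters are illustrative only.

class ContrastiveAutoEncoder(nn.Module):
    def __init__(self, in_dim=3 * 32 * 32, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent representation
        x_hat = self.decoder(z)      # reconstruction of the input
        return z, x_hat

def nt_xent_loss(z1, z2, temperature=0.5):
    # SimCLR-style NT-Xent loss over two augmented views of the same batch.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d)
    sim = z @ z.t() / temperature                             # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))                # exclude self-pairs
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def training_step(model, view1, view2, lambda_rec=1.0):
    # Contrastive term pulls views of the same image together (tightening
    # in-distribution clusters); reconstruction term refines the latent space.
    z1, x_hat1 = model(view1)
    z2, x_hat2 = model(view2)
    loss_con = nt_xent_loss(z1, z2)
    loss_rec = F.mse_loss(x_hat1, view1) + F.mse_loss(x_hat2, view2)
    return loss_con + lambda_rec * loss_rec

@torch.no_grad()
def ood_score(model, x):
    # One plausible scoring rule: per-sample reconstruction error, which
    # tends to be higher for OOD inputs (the paper's exact criterion may differ).
    _, x_hat = model(x)
    return F.mse_loss(x_hat, x, reduction='none').mean(dim=1)

In practice, view1 and view2 would come from two random augmentations of the same unlabeled batch (e.g., crops and color jitter on CIFAR-100 images, flattened here for the MLP), and the scalar returned by ood_score would be thresholded on a held-out in-distribution set.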
ISSN: 2076-3417
DOI: 10.3390/app132312930