Quad-Channel Contrastive Prototype Networks for Open-Set Recognition in Domain-Specific Tasks

Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main authors: Alfarisy, Gusti Ahmad Fanshuri; Malik, Owais Ahmed; Hong, Ong Wee
Format: Article
Language: English
Subjects:
Online access: Full text
Description

Abstract: A traditional deep neural network classifier assumes that only the training classes appear during testing, i.e., a closed-world setting. In most real-world applications, an open-set environment is more realistic, since classes unseen during training may appear over the model's lifetime. Open-set recognition (OSR) equips the model to address this issue by reducing open-set risk, the risk that unknown classes are recognized as known classes. Unfortunately, many proposed open-set techniques evaluate performance on "toy" datasets and do not consider transfer learning, which has become common practice for obtaining strong performance from deep learning models. We propose a quad-channel contrastive prototype network (QC-CPN) that uses quad-channel views of the input with a contrastive prototype loss for real-world applications. Because open-set techniques also introduce new hyperparameters that must be tuned to justify their performance, we first employ evolutionary simulated annealing (EvoSA) to find good hyperparameters and then evaluate the resulting configurations together with our proposed approach. The comparison results show that QC-CPN outperforms other state-of-the-art techniques in rejecting unseen classes on a domain-specific dataset using the same backbone (MNetV3-Large) and could serve as a strong baseline for future study.
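The abstract's core mechanism, rejecting unseen classes via distance to learned class prototypes, can be illustrated with a generic sketch. This is not the paper's QC-CPN (which uses quad-channel views and a contrastive prototype loss); it only shows the standard prototype-based rejection rule that such methods share: classify to the nearest class prototype in embedding space, and reject as "unknown" when that distance exceeds a threshold (a hyperparameter of the kind the paper tunes with EvoSA):

```python
import numpy as np

def class_prototypes(features, labels):
    """Mean embedding per known class (a common prototype definition)."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_open_set(x, prototypes, tau):
    """Nearest-prototype classification with distance-based rejection.

    Returns the nearest class label, or "unknown" if the embedding x
    lies farther than tau from every prototype (open-set rejection).
    """
    dists = {c: np.linalg.norm(x - p) for c, p in prototypes.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= tau else "unknown"

# Illustrative toy embeddings: two known classes, one far-away outlier.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
labs = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labs)
print(predict_open_set(np.array([0.05, 0.0]), protos, tau=1.0))   # near class 0
print(predict_open_set(np.array([20.0, 20.0]), protos, tau=1.0))  # rejected
```

The threshold `tau` controls the trade-off between misclassifying unknowns as known (open-set risk) and wrongly rejecting known samples, which is why hyperparameter search matters for a fair comparison of OSR methods.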
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3275743