TL-ADA: Transferable Loss-based Active Domain Adaptation

Bibliographic Details
Published in: Neural Networks, 2023-04, Vol. 161, pp. 670-681
Main authors: Han, Kyeongtak; Kim, Youngeun; Han, Dongyoon; Lee, Hojun; Hong, Sungeun
Format: Article
Language: English
Online access: Full text
Description
Abstract: The field of Active Domain Adaptation (ADA) has been investigating ways to close the performance gap between supervised and unsupervised learning settings. Previous ADA research has primarily focused on query selection, but there has been little examination of how to effectively train newly labeled target samples using both labeled source samples and unlabeled target samples. In this study, we present a novel Transferable Loss-based ADA (TL-ADA) framework. Our approach is inspired by loss-based query selection, which has shown promising results in active learning. However, directly applying loss-based query selection to the ADA scenario leads to a buildup of high-loss samples that do not contribute to the model due to transferability issues and low diversity. To address these challenges, we propose a transferable doubly nested loss, which incorporates target pseudo labels and a domain adversarial loss. Our TL-ADA framework trains the model sequentially, considering both the domain type (source/target) and the availability of labels (labeled/unlabeled). Additionally, we encourage the pseudo labels to have low self-entropy and diverse class distributions to improve their reliability. Experiments on several benchmark datasets demonstrate that our TL-ADA model outperforms previous ADA methods, and in-depth analysis supports the effectiveness of our proposed approach.
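One ingredient the abstract spells out concretely is the pseudo-label regularization: pseudo labels should have low self-entropy (confident per-sample predictions) while covering diverse classes across a batch. The sketch below is only an illustration of that idea, assuming a PyTorch classifier; the function name, the entropy-minus-marginal-entropy formulation, and any weighting are assumptions inferred from the abstract, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def pseudo_label_regularizer(logits: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Illustrative pseudo-label regularizer for unlabeled target batches.

    Two terms, mirroring the abstract's description:
      * low self-entropy  -> each sample's prediction should be confident;
      * class diversity   -> the batch-averaged prediction should not collapse
        onto a few classes (i.e., high entropy of the marginal distribution).
    Minimizing (mean self-entropy - marginal entropy) encourages both.
    """
    probs = F.softmax(logits, dim=1)                                   # (B, C)
    self_entropy = -(probs * torch.log(probs + eps)).sum(dim=1).mean()
    marginal = probs.mean(dim=0)                                       # (C,)
    marginal_entropy = -(marginal * torch.log(marginal + eps)).sum()
    return self_entropy - marginal_entropy
```

In a training loop, such a term would typically be added to the unlabeled-target objective with a small weight, e.g. `loss = task_loss + 0.1 * pseudo_label_regularizer(target_logits)`; the weight and the exact combination with the paper's transferable doubly nested loss are likewise assumptions.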
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2023.02.004