Sample separation and domain alignment complementary learning mechanism for open set domain adaptation
Published in: Applied Intelligence (Dordrecht, Netherlands), 2023-08, Vol. 53 (15), p. 18790-18805
Main authors: , , , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract: Open Set Domain Adaptation (OSDA) reduces domain shift and semantic shift by separating target samples into known and unknown classes and aligning the known target samples with the source samples. Unfortunately, performing either separation or alignment first causes negative shift in the other task. Moreover, many methods do not exploit the knowledge carried by target-domain samples. This study proposes a new method to address these issues, called the Sample Separation and Domain Alignment Complementary Learning Mechanism (CLM) for Open Set Domain Adaptation. Specifically, it proposes a complementary learning mechanism that jointly trains two complementary structures, a Sample Separation Module (SSMod) and a Domain Alignment Module (DAMod). SSMod and DAMod run simultaneously and exchange training experience during learning through self-supervised pseudo-labeling. In addition, a novel sample separation method is introduced that not only sharpens the distinction between known and unknown target classes but also enriches the model's semantic knowledge by exploiting the unlabeled data in an unsupervised manner. Extensive experiments demonstrate that the method achieves strong performance on four standard benchmarks: Digits, Office-31, Office-Home, and VisDA-2017. For example, CLM reaches an HOS score of 89.2% on Office-31, 1.8% higher than the second-best method.
ISSN: 0924-669X, 1573-7497
DOI: 10.1007/s10489-022-04262-0
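The abstract above describes two heads trained in tandem: a Sample Separation Module (SSMod) that splits unlabeled target samples into known vs. unknown, and a Domain Alignment Module (DAMod) that classifies and aligns the known part against the source classes, with the two exchanging self-supervised pseudo-labels. The sketch below shows one plausible way such a complementary training step could be wired in PyTorch; the linear heads, the confidence threshold `tau`, and the specific pseudo-label exchange rules are illustrative assumptions, not the paper's actual formulation. (HOS, the metric quoted above, is the harmonic mean of the accuracy on known classes and the accuracy on the unknown class.)

```python
# Minimal sketch of a complementary two-module OSDA training step, assuming PyTorch.
# SSMod/DAMod names come from the abstract; their internals and the pseudo-label
# exchange rule below are illustrative placeholders, not the authors' method.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_KNOWN = 10      # known (shared) classes; index NUM_KNOWN is the "unknown" slot
FEAT_DIM = 256

class SSMod(nn.Module):
    """Sample-separation head: scores each target sample as known (0) vs. unknown (1)."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(FEAT_DIM, 2)
    def forward(self, f):
        return self.head(f)

class DAMod(nn.Module):
    """Domain-alignment head: classifies into NUM_KNOWN classes plus one unknown slot."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(FEAT_DIM, NUM_KNOWN + 1)
    def forward(self, f):
        return self.head(f)

def train_step(backbone, ssmod, damod, opt, xs, ys, xt, tau=0.9):
    """One complementary step: each module is supervised on the unlabeled
    target batch only by the other module's confident pseudo-labels."""
    fs, ft = backbone(xs), backbone(xt)

    # Labeled source branch: ordinary supervised classification loss.
    loss_src = F.cross_entropy(damod(fs), ys)

    with torch.no_grad():
        # DAMod -> SSMod: the unknown-slot probability yields known/unknown pseudo-labels.
        p_cls = F.softmax(damod(ft), dim=1)
        unk_prob = p_cls[:, NUM_KNOWN]
        sep_pseudo = (unk_prob > 0.5).long()                     # 1 = unknown
        sep_conf = torch.maximum(unk_prob, 1 - unk_prob) > tau   # keep confident ones

        # SSMod -> DAMod: samples SSMod accepts as known get class pseudo-labels,
        # the rest are routed to the unknown slot.
        p_sep = F.softmax(ssmod(ft), dim=1)
        known_mask = p_sep[:, 0] > tau
        cls_pseudo = p_cls[:, :NUM_KNOWN].argmax(dim=1)
        cls_pseudo[~known_mask] = NUM_KNOWN

    loss_sep = (F.cross_entropy(ssmod(ft)[sep_conf], sep_pseudo[sep_conf])
                if sep_conf.any() else ft.sum() * 0)
    loss_align = F.cross_entropy(damod(ft), cls_pseudo)

    loss = loss_src + loss_sep + loss_align
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random features standing in for real source/target batches.
backbone = nn.Sequential(nn.Linear(64, FEAT_DIM), nn.ReLU())
ssmod, damod = SSMod(), DAMod()
opt = torch.optim.SGD([*backbone.parameters(), *ssmod.parameters(), *damod.parameters()], lr=1e-3)
xs, ys = torch.randn(32, 64), torch.randint(0, NUM_KNOWN, (32,))
xt = torch.randn(32, 64)
print(train_step(backbone, ssmod, damod, opt, xs, ys, xt))
```

The point mirrored from the abstract is that neither module waits for the other: both are updated in the same step, each receiving target supervision only from the other module's confident predictions.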