Margin-Based Adversarial Joint Alignment Domain Adaptation
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-04, Vol. 32 (4), pp. 2057-2067
Main authors: , , ,
Format: Article
Language: English
Abstract: Domain adaptation aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain whose data distribution differs from that of the source domain. Most existing methods focus on aligning the data distributions of the source and target domains but ignore the discriminability of the feature space among categories, so samples close to the decision boundary are easily misclassified. To address this issue, we propose a Margin-based Adversarial Joint Alignment (MAJA) that constrains the feature spaces of the source and target domains to be both aligned and discriminative. The proposed MAJA consists of two components: a joint alignment module and a margin-based generative module. The joint alignment module aligns the source and target feature spaces by considering the joint distribution of features and labels; the embedded features and their corresponding labels are treated as paired data for domain alignment. The margin-based generative module boosts the discriminability of the feature space, i.e., it pushes all samples as far from the decision boundary as possible. It first employs a Generative Adversarial Network (GAN) to generate fake images for each category, and then applies adversarial learning to enlarge the category margin for real images and reduce it for the generated fake images. Evaluations on three benchmarks, i.e., small image datasets, VisDA-2017, and Office-31, verify the effectiveness of the proposed method.
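The abstract describes two ideas: a domain discriminator that operates on feature-label pairs (joint alignment) and a margin objective that is enlarged for real images and reduced for GAN-generated fakes. The following is a minimal PyTorch-style sketch of those two ideas only, assuming an outer-product pairing of features and soft labels and a logit-gap notion of margin; the module names, dimensions, and loss form are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumptions, not the paper's code): a joint-alignment discriminator
# over (feature, label) pairs, plus a margin-style loss that is enlarged for
# real samples and reduced for generated fakes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointDiscriminator(nn.Module):
    """Domain discriminator over (feature, label) pairs.

    The pair is formed as the outer product of the embedding and the soft
    label prediction, so the discriminator models the joint distribution
    p(feature, label) rather than the marginal p(feature).
    """

    def __init__(self, feat_dim: int = 256, num_classes: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * num_classes, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1),  # 1 = source domain, 0 = target domain
        )

    def forward(self, feat: torch.Tensor, label_prob: torch.Tensor) -> torch.Tensor:
        # Outer product: (B, feat_dim) x (B, num_classes) -> (B, feat_dim * num_classes)
        pair = torch.bmm(feat.unsqueeze(2), label_prob.unsqueeze(1)).flatten(1)
        return self.net(pair)


def margin_loss(logits: torch.Tensor, labels: torch.Tensor,
                is_real: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Enlarge the category margin for real images, reduce it for fake ones.

    "Margin" is read here as the gap between the true-class logit and the
    largest competing logit (one plausible interpretation of the abstract).
    """
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    other = logits.clone()
    other.scatter_(1, labels.unsqueeze(1), float("-inf"))
    gap = true_logit - other.max(dim=1).values
    # Real samples: push the gap above the margin. Fake samples: shrink the gap.
    real_term = F.relu(margin - gap)
    fake_term = F.relu(gap)
    return torch.where(is_real.bool(), real_term, fake_term).mean()


if __name__ == "__main__":
    feats = torch.randn(8, 256)
    probs = torch.softmax(torch.randn(8, 12), dim=1)
    disc = JointDiscriminator()
    print(disc(feats, probs).shape)  # torch.Size([8, 1])

    logits = torch.randn(8, 12)
    labels = torch.randint(0, 12, (8,))
    is_real = torch.randint(0, 2, (8,)).float()
    print(margin_loss(logits, labels, is_real))
```

In a full pipeline, the discriminator would be trained adversarially against the feature extractor (domain confusion on the feature-label pairs), while the margin loss would be applied to classifier logits of real and GAN-generated images; the generator and training schedule are omitted here.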
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2021.3081729