Reducing bias to source samples for unsupervised domain adaptation

Bibliographic details
Published in: Neural Networks, 2021-09, Vol. 141, p. 61-71
Authors: Ye, Yalan; Huang, Ziwei; Pan, Tongjie; Li, Jingjing; Shen, Heng Tao
Format: Article
Language: English
Online access: Full text
Abstract
Unsupervised Domain Adaptation (UDA) makes predictions for target-domain data while labels are available only in the source domain. Many UDA works focus on finding a common representation of the two domains via domain alignment, assuming that a classifier trained in the source domain generalizes well to the target domain. Thus, most existing UDA methods only consider minimizing the domain discrepancy without enforcing any constraint on the classifier. However, due to the uniqueness of each domain, it is difficult to achieve a perfect common representation, especially when the similarity between the source and target domains is low. As a consequence, the classifier is biased toward source-domain features and makes incorrect predictions on the target domain. To address this issue, we propose a novel approach named reducing bias to source samples for unsupervised domain adaptation (RBDA), which jointly matches the distributions of the two domains and reduces the classifier's bias to source samples. Specifically, RBDA first conditions the adversarial networks on the cross-covariance of learned features and classifier predictions to match the distributions of the two domains. Then, to reduce the classifier's bias to source samples, RBDA is designed with three effective mechanisms: a mean teacher model that guides the training of the original model, a regularization term that regularizes the model, and an improved cross-entropy loss for better learning of supervised information. Comprehensive experiments on several open benchmarks demonstrate that RBDA achieves state-of-the-art results, showing its effectiveness for unsupervised domain adaptation scenarios.
Highlights:
• A novel method named RBDA is proposed for domain adaptation.
• RBDA focuses on reducing the classifier's bias to source samples.
• Comprehensive experiments demonstrate the effectiveness of RBDA.
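The abstract only names the building blocks; the paper itself defines the exact architecture and losses. Below is a minimal, hypothetical PyTorch sketch of those ingredients, assuming a CDAN-style outer-product conditioning of the domain discriminator, an exponential-moving-average mean teacher, and label smoothing as a stand-in for the improved cross-entropy loss. The class and function names are illustrative, not from the paper, and the unspecified regularization term is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalDomainDiscriminator(nn.Module):
    """Domain discriminator fed with the per-sample outer product of learned
    features and classifier probabilities (cross-covariance-style conditioning)."""

    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # source-vs-target logit
        )

    def forward(self, features: torch.Tensor, predictions: torch.Tensor) -> torch.Tensor:
        # (B, C, 1) x (B, 1, D) -> (B, C, D): outer product of predictions and
        # features, flattened so the adversary sees the conditioned representation.
        joint = torch.bmm(predictions.unsqueeze(2), features.unsqueeze(1))
        return self.net(joint.flatten(start_dim=1))


@torch.no_grad()
def update_mean_teacher(teacher: nn.Module, student: nn.Module, momentum: float = 0.99) -> None:
    """Exponential-moving-average update of the teacher from the student,
    the usual mean-teacher mechanism; 0.99 is an illustrative momentum."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)


def smoothed_cross_entropy(logits: torch.Tensor, labels: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Label-smoothed cross-entropy, used here only as a plausible stand-in for
    the 'improved cross-entropy loss' mentioned in the abstract."""
    return F.cross_entropy(logits, labels, label_smoothing=eps)


# Illustrative usage with random tensors (batch of 8, 256-dim features, 10 classes).
if __name__ == "__main__":
    feats = torch.randn(8, 256)
    probs = F.softmax(torch.randn(8, 10), dim=1)
    disc = ConditionalDomainDiscriminator(feat_dim=256, num_classes=10)
    domain_logits = disc(feats, probs)  # shape (8, 1)
    loss = smoothed_cross_entropy(torch.randn(8, 10), torch.randint(0, 10, (8,)))
    print(domain_logits.shape, loss.item())
```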
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2021.03.021