Selected confidence sample labeling for domain adaptation

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2023-10, Vol. 555, p. 126624, Article 126624
Authors: Zheng, Zefeng; Teng, Shaohua; Wu, Naiqi; Teng, Luyao; Zhang, Wei; Fei, Lunke
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Recently, progressive labeling (PL) has been proposed to achieve reliable knowledge learning by selecting reliable target samples for training. Although PL achieves fruitful results, two problems limit its performance: (a) PL may fail to filter out uncertain samples that lie near the classification boundaries, so low-quality target samples may be selected for training, leading to error accumulation; and (b) PL may overlook consistency in the sample selection stage across iterations, which can result in unstable sample selection. To cope with these problems, we propose a novel method called Selected Confidence Sample Labeling (SCSL). SCSL consists of three parts: Discriminative Progressive Labeling (DPL), a Consistency Strategy (CS), and Differential Learning (DL). First, DPL selects high-confidence target samples based on the difference between the highest and the second-highest classification probabilities. In this way, uncertain samples are filtered out while the quality of the selected target samples is ensured. Second, CS comprises a group of consistency strategies that keep the selected high-confidence target samples close to their class centroids. This further improves the confidence of the selected target samples and ensures that the model does not remove or replace them during iteration. Finally, DL is a bi-strategic training approach that applies CS and top-k fuzzy probability clustering to train the high-confidence and the remaining target samples, respectively. In doing so, all target samples are trained simultaneously and the generalization of the model is improved. Extensive experiments on four benchmark datasets, in comparison with several advanced algorithms, demonstrate the superiority of SCSL.
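To make the selection rule in the abstract concrete, the following minimal NumPy sketch shows how a DPL-style margin filter and a rough centroid-consistency check might look. The function names, the margin_threshold hyperparameter, and the centroid check are illustrative assumptions, not the authors' implementation.

import numpy as np

def dpl_margin_selection(probs, margin_threshold=0.5):
    """Margin-based selection in the spirit of DPL.

    probs: (n_samples, n_classes) softmax class probabilities for the
    unlabeled target domain. margin_threshold is a hypothetical
    hyperparameter; the paper's actual selection rule may differ.
    """
    sorted_probs = np.sort(probs, axis=1)      # per-row ascending sort
    top1 = sorted_probs[:, -1]                 # highest class probability
    top2 = sorted_probs[:, -2]                 # second-highest probability
    margin = top1 - top2                       # confidence margin
    selected = margin >= margin_threshold      # filter out uncertain samples
    pseudo_labels = probs.argmax(axis=1)       # pseudo-label = argmax class
    return selected, pseudo_labels

def centroid_consistent(features, pseudo_labels, selected):
    """Keep a selected sample only if its nearest class centroid agrees
    with its pseudo-label (a rough stand-in for the CS step).

    features: (n_samples, d) target-domain feature vectors. Centroids are
    computed from the currently selected samples; assumes at least one
    sample is selected per retained class.
    """
    classes = np.unique(pseudo_labels[selected])
    centroids = np.stack([features[selected & (pseudo_labels == c)].mean(axis=0)
                          for c in classes])
    # distance of every sample to every class centroid: (n_samples, n_classes)
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    nearest = classes[dists.argmin(axis=1)]
    return selected & (nearest == pseudo_labels)

A typical call chain would be sel, labels = dpl_margin_selection(probs) followed by sel = centroid_consistent(feats, labels, sel); the retained samples would then be trained with their pseudo-labels and the remainder with a clustering-based objective, matching the bi-strategic DL scheme the abstract describes.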
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2023.126624