Curriculum Guided Domain Adaptation in the Dark
Format: Article
Language: English
Abstract: Addressing the rising concerns of privacy and security, domain adaptation in
the dark aims to adapt a black-box source trained model to an unlabeled target
domain without access to any source data or source model parameters. The need
for domain adaptation of black-box predictors becomes even more pronounced to
protect intellectual property as deep learning based solutions are becoming
increasingly commercialized. Current methods distill noisy predictions on the
target data obtained from the source model to the target model, and/or separate
clean/noisy target samples before adapting using traditional noisy label
learning algorithms. However, these methods do not utilize the easy-to-hard
learning nature of the clean/noisy data splits. Also, none of the existing
methods are end-to-end, and require a separate fine-tuning stage and an initial
warmup stage. In this work, we present Curriculum Adaptation for Black-Box
(CABB) which provides a curriculum guided adaptation approach to gradually
train the target model, first on target data with high confidence (clean)
labels, and later on target data with noisy labels. CABB utilizes
Jensen-Shannon divergence as a better criterion for clean-noisy sample
separation, compared to the traditional criterion of cross entropy loss. Our
method utilizes co-training of a dual-branch network to suppress error
accumulation resulting from confirmation bias. The proposed approach is
end-to-end trainable and does not require any extra fine-tuning stage, unlike
existing methods. Empirical results on standard domain adaptation datasets show
that CABB outperforms existing state-of-the-art black-box DA models and is
comparable to white-box domain adaptation models.
DOI: 10.48550/arxiv.2308.00956
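
The abstract's central mechanism, separating "clean" from "noisy" pseudo-labeled target samples by Jensen-Shannon divergence rather than by cross-entropy loss, can be illustrated with a minimal sketch. This is not the authors' implementation: the `target_model`, the soft pseudo-labels `pseudo_probs` obtained from the black-box source predictor, and the threshold `tau` are assumed placeholders.

```python
# Illustrative sketch (assumptions, not the CABB code): split target samples into
# clean/noisy subsets using the Jensen-Shannon divergence between the target
# model's prediction and the black-box source model's soft pseudo-label.
import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    """Per-sample Jensen-Shannon divergence between probability vectors p and q."""
    m = 0.5 * (p + q)
    kl_pm = torch.sum(p * (torch.log(p + eps) - torch.log(m + eps)), dim=1)
    kl_qm = torch.sum(q * (torch.log(q + eps) - torch.log(m + eps)), dim=1)
    return 0.5 * (kl_pm + kl_qm)

@torch.no_grad()
def split_clean_noisy(target_model, loader, pseudo_probs, tau=0.1, device="cpu"):
    """Return indices of low-JSD ("clean") and high-JSD ("noisy") target samples."""
    clean_idx, noisy_idx = [], []
    for x, idx in loader:                                   # loader yields (images, sample indices)
        p = F.softmax(target_model(x.to(device)), dim=1)    # target model prediction
        q = pseudo_probs[idx].to(device)                    # source model's soft pseudo-label
        jsd = js_divergence(p, q)
        clean_idx += idx[(jsd <= tau).cpu()].tolist()
        noisy_idx += idx[(jsd > tau).cpu()].tolist()
    return clean_idx, noisy_idx
```

In a curriculum-style schedule as described in the abstract, training would start from the low-divergence (clean) split and progressively incorporate the noisy split; the paper additionally co-trains a dual-branch network to limit confirmation bias, which this sketch does not show.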