Continual Unsupervised Domain Adaptation in Data-Constrained Environments

Bibliographic Details
Published in: IEEE Transactions on Artificial Intelligence, 2024-01, Vol. 5 (1), p. 167-178
Main Authors: Taufique, Abu Md Niamul; Jahan, Chowdhury Sadman; Savakis, Andreas
Format: Article
Language: English
Description
Abstract: Domain adaptation (DA) techniques aim to overcome the domain shift between the source domain used for training and the target domain where testing takes place. However, current DA methods assume that the entire target domain is available during adaptation, which may not hold in practice. We introduce a new, data-constrained DA paradigm in which unlabeled target samples are received in batches and adaptation is performed continually. We propose a novel source-free method for continual unsupervised domain adaptation (UDA) that utilizes a buffer for selective replay of previously seen samples. In our continual DA framework, we selectively mix samples from incoming batches with data stored in the buffer using buffer-management strategies, and use the combination to incrementally update our model. We evaluate and compare the classification performance of the continual DA approach with state-of-the-art (SOTA) DA methods that operate on the entire target domain. Our results on three popular DA datasets demonstrate the benefits of our method when operating in data-constrained environments. We further extend our experiments to adaptation over multiple target domains, where our method performs favorably against the SOTA methods.
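
The abstract describes the adaptation loop only at a high level. As a rough illustration, the PyTorch sketch below pairs a fixed-capacity replay buffer with per-batch unsupervised updates; the reservoir-sampling policy, the entropy-minimization objective, and all names (ReplayBuffer, adapt_continually, replay_size) are assumptions made for illustration, not the authors' implementation.

import random

import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Fixed-capacity buffer filled by reservoir sampling, one possible
    buffer-management strategy; the paper's actual policy may differ."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []       # stored target samples (input tensors)
        self.num_seen = 0    # total number of samples offered so far

    def add(self, batch):
        # Offer each sample in the incoming batch to the reservoir.
        for x in batch:
            self.num_seen += 1
            if len(self.data) < self.capacity:
                self.data.append(x)
            else:
                # Replace a stored sample with probability capacity/num_seen,
                # so every sample seen so far is kept with equal probability.
                j = random.randrange(self.num_seen)
                if j < self.capacity:
                    self.data[j] = x

    def sample(self, n):
        # Draw up to n stored samples uniformly at random for replay.
        n = min(n, len(self.data))
        return torch.stack(random.sample(self.data, n))


def adapt_continually(model, target_batches, optimizer, buffer, replay_size=32):
    """Source-free continual adaptation: mix each incoming unlabeled target
    batch with replayed samples, then take one unsupervised update step.
    The entropy-minimization loss is a placeholder objective, not
    necessarily the one used in the paper."""
    model.train()
    for batch in target_batches:      # unlabeled target data, batch by batch
        mixed = batch
        if len(buffer.data) > 0:
            mixed = torch.cat([batch, buffer.sample(replay_size)])
        probs = F.softmax(model(mixed), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
        buffer.add(batch.detach())    # store this batch for future replay
    return model

Reservoir sampling is only one way to decide which samples stay in the buffer; the abstract mentions buffer-management strategies without detailing them, and a confidence- or diversity-based selection rule would slot into ReplayBuffer.add just as easily.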
ISSN: 2691-4581
DOI: 10.1109/TAI.2022.3233791