Unsupervised Domain Adaptation Enhanced by Fuzzy Prompt Learning

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Fuzzy Systems, 2024-07, Vol. 32 (7), p. 4038-4048
Main Authors: Shi, Kuo; Lu, Jie; Fang, Zhen; Zhang, Guangquan
Format: Article
Language: English
Online Access: Order full text
Description
Summary: Unsupervised domain adaptation (UDA) addresses the challenge of distribution shift between a labeled source domain and an unlabeled target domain by utilizing knowledge from the source. Traditional UDA methods mainly focus on single-modal scenarios, either vision or language, and thus do not fully explore the advantages of multimodal representations. Vision-language models utilize multimodal information, applying prompt learning techniques to address target domain tasks. Motivated by recent advancements in pretrained vision-language models, this article expands the UDA framework to incorporate multimodal approaches using fuzzy techniques. The adoption of fuzzy techniques, preferred over conventional domain adaptation methods, rests on two key aspects: 1) the nature of prompt learning is intrinsically linked to fuzzy logic, and 2) fuzzy techniques are superior at processing soft information and effectively exploiting inherent relationships both within and across domains. To this end, we propose UDA enhanced by fuzzy prompt learning (FUZZLE), a simple and effective method for aligning the source and target domains via domain-specific prompt learning. Specifically, we introduce a novel technique to enhance prompt learning in the target domain: it integrates fuzzy C-means clustering and a novel instance-level fuzzy vector into the prompt learning loss function, minimizing the distance between prompt cluster centers and instance prompts and thereby enhancing the prompt learning process. In addition, we propose a Kullback-Leibler (KL) divergence-based loss function with a fuzzification factor, designed to minimize the distribution discrepancy in the classification of similar cross-domain data, aligning domain-specific prompts during training. We contribute an in-depth analysis to understand the effectiveness of FUZZLE.
Extensive experiments demonstrate that our method achieves superior performance on standard UDA benchmarks.
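The abstract names two ingredients of the FUZZLE objective: fuzzy C-means memberships over prompt features, and a KL-divergence loss with a fuzzification factor for aligning cross-domain class distributions. The sketch below is purely illustrative and is not the paper's implementation: the function names, the standard fuzzy C-means membership formula, and the reading of the fuzzification factor as a temperature-style softening of the logits are all assumptions made for the example.

```python
import numpy as np

def fuzzy_cmeans_memberships(X, centers, m=2.0):
    """Standard fuzzy C-means membership update (illustrative, not the paper's code).

    u[n, i] = 1 / sum_j (d_ni / d_nj)^(2/(m-1)), where d_ni is the distance
    from sample n to cluster center i and m > 1 is the fuzzifier.
    """
    # Pairwise distances, shape (N, C); epsilon avoids division by zero.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-8
    power = 2.0 / (m - 1.0)
    # ratio[n, i, j] = d[n, i] / d[n, j]
    ratio = d[:, :, None] / d[:, None, :]
    u = 1.0 / np.sum(ratio ** power, axis=2)
    return u  # rows sum to 1: a soft (fuzzy) assignment per sample

def _softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuzzified_kl_loss(src_logits, tgt_logits, tau=2.0):
    """Mean KL(p_src || p_tgt) on temperature-softened class distributions.

    Here tau plays the role of a fuzzification factor (an assumption for this
    sketch): tau > 1 softens both distributions before measuring discrepancy.
    """
    p = _softmax(src_logits / tau)
    q = _softmax(tgt_logits / tau)
    return float(np.mean(np.sum(p * np.log((p + 1e-8) / (q + 1e-8)), axis=1)))
```

A sample sitting exactly on a cluster center receives a membership near 1 for that cluster, and the KL term vanishes when the source and target class distributions already agree, so minimizing it pulls the domain-specific predictions together.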
ISSN:1063-6706
1941-0034
DOI:10.1109/TFUZZ.2024.3389705