DomCLP: Domain-wise Contrastive Learning with Prototype Mixup for Unsupervised Domain Generalization
Abstract: Self-supervised learning (SSL) methods based on instance discrimination tasks with InfoNCE have achieved remarkable success. Despite this success, SSL models often struggle to generate effective representations for unseen-domain data. To address this issue, research on unsupervised domain generalization (UDG), which aims to develop SSL models that generate domain-irrelevant features, has been conducted. Most UDG approaches utilize contrastive learning with InfoNCE to generate representations and perform feature alignment based on strong assumptions to generalize domain-irrelevant common features across multi-source domains. However, instance discrimination tasks suppress domain-irrelevant common features and amplify domain-relevant ones, so existing methods that rely on them are ineffective at extracting common features, hindering domain generalization. Furthermore, the strong assumptions underlying feature alignment can bias feature learning, reducing the diversity of common features. In this paper, we propose a novel approach, DomCLP: Domain-wise Contrastive Learning with Prototype Mixup. We first analyze how InfoNCE suppresses domain-irrelevant common features and amplifies domain-relevant features. Based on this analysis, we propose Domain-wise Contrastive Learning (DCon) to enhance domain-irrelevant common features, and Prototype Mixup Learning (PMix) to generalize those features across multiple domains without relying on strong assumptions. The proposed method consistently outperforms state-of-the-art methods on the PACS and DomainNet datasets across various label fractions, showing significant improvements. Our code will be released; our project page is available at https://github.com/jinsuby/DomCLP.
DOI: 10.48550/arxiv.2412.09074
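
As a rough illustration of the two components described in the abstract, below is a minimal PyTorch sketch. It is not the authors' implementation (see the project page above for the released code); the function names, the per-domain negative sampling, and the Beta-distributed mixing coefficient are all illustrative assumptions.

```python
# Illustrative sketch only -- not the official DomCLP code.
# Assumptions: features come in two augmented views (z1, z2) with integer
# domain labels, and "prototypes" are, e.g., per-domain cluster centroids.
import torch
import torch.nn.functional as F

def dcon_loss(z1, z2, domains, tau=0.1):
    """Domain-wise InfoNCE: contrast each sample only against others from
    the SAME domain, so the task cannot be solved with domain cues and the
    model is pushed toward domain-irrelevant features."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    loss, n_domains = z1.new_zeros(()), 0
    for d in domains.unique():
        idx = (domains == d).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:          # need at least one in-domain negative
            continue
        logits = z1[idx] @ z2[idx].t() / tau          # in-domain similarities
        target = torch.arange(idx.numel(), device=z1.device)
        loss = loss + F.cross_entropy(logits, target) # positives on diagonal
        n_domains += 1
    return loss / max(n_domains, 1)

def pmix_loss(z, prototypes, alpha=1.0):
    """Prototype mixup: pull each feature toward a normalized mixture of
    two randomly chosen prototypes, providing cross-domain targets without
    assuming any particular alignment between domains."""
    n, p = z.size(0), prototypes.size(0)
    i = torch.randint(p, (n,), device=z.device)
    j = torch.randint(p, (n,), device=z.device)
    lam = torch.distributions.Beta(alpha, alpha).sample((n, 1)).to(z.device)
    mixed = F.normalize(lam * prototypes[i] + (1.0 - lam) * prototypes[j], dim=1)
    return -(F.normalize(z, dim=1) * mixed).sum(dim=1).mean()  # cosine pull
```

Restricting negatives to a single domain keeps the contrastive gradient focused on features shared across domains, while the mixed prototypes interpolate between domains rather than forcing them onto a single point, loosely corresponding to the strong alignment assumptions the abstract argues against.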