Instance Paradigm Contrastive Learning for Domain Generalization

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2024-02, Vol. 34 (2), p. 1032-1042
Authors: Chen, Zining; Wang, Weiqiu; Zhao, Zhicheng; Su, Fei; Men, Aidong; Dong, Yuan
Format: Article
Language: English
Description
Abstract: Domain Generalization (DG) aims to develop models that can learn from data in source domains and generalize to unseen target domains. Recently, many domain generalization algorithms have emerged, but most of them rely on complex modules. Among prior methods under the DG setting, contrastive learning has become a promising solution due to its simplicity and efficiency. However, existing contrastive learning neglects distribution shifts, which cause severe domain confusion. In this paper, we propose an instance paradigm contrastive learning framework that introduces contrast between original features and novel paradigms to alleviate domain-specific distractions. We then exploit hard-pair information, an essential factor in contrastive learning, based on domain labels and feature similarity. Moreover, to produce domain-invariant instance paradigms, we generate multiple views of the original images and design a novel channel-wise attention mechanism to dynamically combine features from all the views. Furthermore, a test-time feature integration module is designed to mimic the paradigms from the training process and improve generalization ability. Extensive experiments show that our method achieves state-of-the-art performance. The proposed algorithm can also serve as a plug-and-play module that improves the performance of existing methods by a relatively large margin.
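
The sketch below is a minimal illustration, not the authors' implementation, of two ideas the abstract names: a channel-wise attention mechanism that fuses features from multiple augmented views into a per-instance "paradigm", and a contrastive loss that pulls each original feature toward its own paradigm while repelling the paradigms of other instances. It assumes PyTorch; all class names, tensor shapes, the reduction ratio, and the temperature are illustrative assumptions.

```python
# Hypothetical sketch of instance-paradigm contrastive learning (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttentionFusion(nn.Module):
    """Fuse V view features of shape (B, V, C) into one paradigm per instance (B, C)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Small gating MLP that scores each channel of each view.
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        scores = self.gate(view_feats)            # (B, V, C) per-view, per-channel scores
        weights = scores.softmax(dim=1)           # normalize across the view axis
        return (weights * view_feats).sum(dim=1)  # (B, C) paradigm: convex mix of views per channel


def paradigm_contrastive_loss(feats: torch.Tensor,
                              paradigms: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: each original feature matches its own paradigm (positive)
    and is pushed away from the paradigms of the other instances (negatives)."""
    feats = F.normalize(feats, dim=1)
    paradigms = F.normalize(paradigms, dim=1)
    logits = feats @ paradigms.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(feats.size(0), device=feats.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    B, V, C = 8, 3, 128                       # batch size, number of views, feature channels
    view_feats = torch.randn(B, V, C)         # features of augmented views (from a backbone)
    orig_feats = torch.randn(B, C)            # features of the original images
    fuse = ChannelAttentionFusion(C)
    paradigms = fuse(view_feats)
    loss = paradigm_contrastive_loss(orig_feats, paradigms)
    print(loss.item())
```

Softmax over the view axis means each channel of a paradigm is a convex combination of the corresponding channel across views, which is one plausible way to realize the "dynamically combine features from all the views" behavior described in the abstract; the paper's hard-pair weighting and test-time feature integration are not modeled here.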
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2023.3289201