Domain generalization by class-aware negative sampling-based contrastive learning

Bibliographic details
Published in: AI Open 2022, Vol. 3, pp. 200-207
Authors: Xie, Mengwei, Zhao, Suyun, Chen, Hong, Li, Cuiping
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: Training and test data often follow different feature distributions: the test data may differ in style and background from the training data because of differing collection sources or privacy protection. This is the transfer generalization problem. Contrastive learning, currently the most successful unsupervised learning method, provides good generalization performance across varying data distributions and can use labeled data more effectively without overfitting. This study demonstrates how contrast can enhance a model's ability to generalize, how joint contrastive learning and supervised learning can strengthen one another, and how this approach can be broadly applied across disciplines.

• The present work addresses domain generalization through joint contrastive learning and adversarial learning.
• The model adopts contrastive learning with negative instances sampled from different classes, instead of the random negative sampling strategy.
• A theoretical analysis is provided for the class-aware negative-sampling contrastive learning.
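The summary above contrasts class-aware negative sampling with the usual random (in-batch) sampling: negatives are drawn only from samples whose class label differs from the anchor's. The record does not give the paper's exact loss, so the following is a minimal NumPy sketch of the general idea, assuming an InfoNCE-style objective; the function name and all details are illustrative, not the authors' implementation.

```python
import numpy as np

def class_aware_contrastive_loss(embeddings, labels, temperature=0.5):
    """InfoNCE-style loss where negatives are restricted to samples from
    *different* classes (class-aware sampling), rather than all other
    samples in the batch. Illustrative sketch, not the paper's method."""
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    losses = []
    for i in range(n):
        # positives: same class as the anchor; negatives: different class
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        neg = [j for j in range(n) if labels[j] != labels[i]]
        if not pos or not neg:
            continue
        for p in pos:
            # softmax cross-entropy with the positive at index 0
            logits = np.array([sim[i, p]] + [sim[i, j] for j in neg])
            losses.append(-logits[0] + np.log(np.exp(logits).sum()))
    return float(np.mean(losses))
```

With well-separated classes (e.g. same-class embeddings identical, cross-class embeddings orthogonal) the loss is lower than when the same embeddings are given scrambled labels, which is the behavior the objective is meant to encourage.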
ISSN: 2666-6510
DOI: 10.1016/j.aiopen.2022.11.004