Contrastive Learning with Negative Sampling Correction
Format: Article
Language: English
Abstract: As one of the most effective self-supervised representation learning methods, contrastive learning (CL) relies on multiple negative pairs to contrast against each positive pair. In the standard practice of contrastive learning, data augmentation is used to generate both positive and negative pairs. While existing work has focused on improving positive sampling, the negative sampling process is often overlooked. In fact, the generated negative samples are often polluted by positive samples, which leads to a biased loss and degraded performance. To correct this negative sampling bias, we propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL). PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct the bias in the contrastive loss. We prove that the corrected loss used in PUCL incurs only a negligible bias compared to the unbiased contrastive loss. PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks. The code of PUCL is in the supplementary file.
DOI: 10.48550/arxiv.2401.08690
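
The abstract does not give the corrected loss itself; the exact objective is in the paper and its supplementary code. As a rough illustration of the general idea only, the sketch below applies a positive-unlabeled style correction to the negative term of an InfoNCE-type loss, subtracting the estimated contribution of positives that leak into the "negative" (unlabeled) pool. The function name, tensor shapes, class prior `tau_plus`, and temperature `t` are illustrative assumptions and are not taken from the PUCL paper.

```python
import math
import torch
import torch.nn.functional as F

def pu_corrected_infonce(z_anchor, z_pos, z_neg, tau_plus=0.1, t=0.5):
    """Illustrative PU-style correction of an InfoNCE-type contrastive loss.

    z_anchor: (B, d) anchor embeddings
    z_pos:    (B, d) embeddings of the positive (augmented) views
    z_neg:    (B, K, d) embeddings of the generated "negatives",
              treated here as unlabeled samples
    tau_plus: assumed prior probability that an unlabeled sample is
              actually a positive
    t:        softmax temperature
    """
    z_anchor = F.normalize(z_anchor, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    z_neg = F.normalize(z_neg, dim=-1)

    # Exponentiated cosine similarities for the positive and unlabeled pairs.
    pos = torch.exp((z_anchor * z_pos).sum(dim=-1) / t)               # (B,)
    neg = torch.exp(torch.einsum("bd,bkd->bk", z_anchor, z_neg) / t)  # (B, K)

    k = z_neg.shape[1]
    # The unlabeled pool contains roughly a tau_plus fraction of positives,
    # so subtract their estimated contribution from the negative term.
    ng = (neg.sum(dim=-1) - k * tau_plus * pos) / (1.0 - tau_plus)
    # Clamp at the theoretical minimum so the corrected term stays positive.
    ng = torch.clamp(ng, min=k * math.exp(-1.0 / t))

    return -torch.log(pos / (pos + ng)).mean()
```

With `tau_plus = 0` this reduces to the standard (biased) InfoNCE objective, which makes the role of the correction term explicit.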