Self-Supervised Visual Representations Learning by Contrastive Mask Prediction
| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Abstract: Advanced self-supervised visual representation learning methods rely on the instance discrimination (ID) pretext task. We point out that the ID task has an implicit semantic consistency (SC) assumption, which may not hold in unconstrained datasets. In this paper, we propose a novel contrastive mask prediction (CMP) task for visual representation learning and design a mask contrast (MaskCo) framework to implement the idea. MaskCo contrasts region-level features instead of view-level features, which makes it possible to identify the positive sample without any assumptions. To bridge the domain gap between masked and unmasked features, we design a dedicated mask prediction head in MaskCo. This module is shown to be the key to the success of CMP. We evaluate MaskCo on training datasets beyond ImageNet and compare its performance with MoCo V2. The results show that MaskCo achieves performance comparable to MoCo V2 when trained on ImageNet, but demonstrates stronger performance across a range of downstream tasks when COCO or Conceptual Captions are used for training. MaskCo provides a promising alternative to ID-based methods for self-supervised learning in the wild.
DOI: 10.48550/arxiv.2108.07954
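
The abstract describes contrasting region-level features and using a dedicated mask prediction head to bridge the gap between masked and unmasked features. The sketch below illustrates that general idea in PyTorch; the names (`MaskPredictionHead`, `cmp_loss`), feature dimensions, temperature, and the two-layer MLP head are illustrative assumptions, not the authors' actual MaskCo architecture or loss.

```python
# Minimal, hypothetical sketch of contrastive mask prediction (CMP):
# a feature predicted for a masked region is contrasted against the same
# region's unmasked feature (positive) and other regions (negatives).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskPredictionHead(nn.Module):
    """Hypothetical head mapping masked-region features into the unmasked
    feature space (the "domain gap" module mentioned in the abstract)."""

    def __init__(self, dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        return self.mlp(x)


def cmp_loss(masked_region_feat, unmasked_region_feat, negative_feats, head, tau=0.2):
    """InfoNCE-style loss over region-level features.

    masked_region_feat:   (B, D) features of the masked region from the masked view
    unmasked_region_feat: (B, D) features of the same region from the unmasked view
    negative_feats:       (N, D) features of other regions, used as negatives
    """
    q = F.normalize(head(masked_region_feat), dim=1)   # predicted region feature
    k_pos = F.normalize(unmasked_region_feat, dim=1)   # positive: same region, unmasked
    k_neg = F.normalize(negative_feats, dim=1)         # negatives: other regions

    l_pos = (q * k_pos).sum(dim=1, keepdim=True)       # (B, 1) positive logits
    l_neg = q @ k_neg.t()                              # (B, N) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)


# Toy usage with random features, just to show the expected shapes.
head = MaskPredictionHead(dim=128)
loss = cmp_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(64, 128), head)
print(loss.item())
```

The InfoNCE-style objective here mirrors the contrastive formulation used in MoCo-style methods, which the paper compares against; how MaskCo actually samples regions and negatives, and which backbone it uses, is not specified in the abstract.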