CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion
Saved in:
Main authors: | , , , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Masked Image Modeling (MIM) has recently been established as a potent
pre-training paradigm. A pretext task is constructed by masking patches in an
input image, and this masked content is then predicted by a neural network
using visible patches as sole input. This pre-training leads to
state-of-the-art performance when finetuned for high-level semantic tasks, e.g.
image classification and object detection. In this paper we instead seek to
learn representations that transfer well to a wide variety of 3D vision and
lower-level geometric downstream tasks, such as depth prediction or optical
flow estimation. Inspired by MIM, we propose an unsupervised representation
learning task trained from pairs of images showing the same scene from
different viewpoints. More precisely, we propose the pretext task of cross-view
completion where the first input image is partially masked, and this masked
content has to be reconstructed from the visible content and the second image.
In single-view MIM, the masked content often cannot be inferred precisely from
the visible portion only, so the model learns to act as a prior influenced by
high-level semantics. In contrast, this ambiguity can be resolved with
cross-view completion from the second unmasked image, on the condition that the
model is able to understand the spatial relationship between the two images.
Our experiments show that our pretext task leads to significantly improved
performance for monocular 3D vision downstream tasks such as depth estimation.
In addition, our model can be directly applied to binocular downstream tasks
like optical flow or relative camera pose estimation, for which we obtain
competitive results without bells and whistles, i.e., using a generic
architecture without any task-specific design. |
---|---|
DOI: | 10.48550/arxiv.2210.10716 |
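The summary's central idea, masking patches in one view while keeping a second view of the same scene fully visible, can be illustrated with a minimal data-preparation sketch. This is not the authors' code: the function name, the dictionary layout, and the patch-embedding representation are assumptions for illustration, and the high masking ratio is only an example of the regime the abstract implies (the masked content should be hard to infer from the first view alone).

```python
import numpy as np

def crossview_completion_batch(view1, view2, mask_ratio=0.9, rng=None):
    """Illustrative sketch of a cross-view completion training example.

    view1, view2: (num_patches, patch_dim) arrays of patch embeddings for two
    images of the same scene from different viewpoints. A random subset of
    view1's patches is masked; a model would reconstruct those patches from
    view1's visible patches plus the fully visible view2, which forces it to
    relate the two viewpoints spatially.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = view1.shape[0]
    num_masked = int(round(mask_ratio * n))
    perm = rng.permutation(n)
    masked_idx, visible_idx = perm[:num_masked], perm[num_masked:]
    return {
        "visible_view1": view1[visible_idx],  # encoder input from view 1
        "view2": view2,                       # second view, fully visible
        "target": view1[masked_idx],          # patches to reconstruct
        "masked_idx": masked_idx,             # positions of the masked patches
    }
```

Single-view MIM would correspond to dropping `view2` from the inputs, which is exactly the ambiguity the abstract says cross-view completion resolves.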