Revisiting Self-Supervised Visual Representation Learning
Main Authors: Alexander Kolesnikov, Xiaohua Zhai, Lucas Beyer
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. Among the large body of recently proposed approaches for unsupervised learning of visual representations, a class of self-supervised techniques achieves superior performance on many challenging benchmarks. A large number of pretext tasks for self-supervised learning have been studied, but other important aspects, such as the choice of convolutional neural network (CNN) architecture, have not received equal attention. Therefore, we revisit numerous previously proposed self-supervised models, conduct a thorough large-scale study and, as a result, uncover multiple crucial insights. We challenge a number of common practices in self-supervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning. As part of our study, we drastically boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin.
DOI: 10.48550/arxiv.1901.09005
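To make the "class of self-supervised techniques" in the abstract concrete, below is a minimal sketch of one pretext task the study revisits, rotation prediction: each image is rotated by 0/90/180/270 degrees and the network learns to classify which rotation was applied, so labels come for free from unlabeled data. This is an illustrative PyTorch sketch, not the authors' implementation; `rotate_batch`, `RotationNet`, the tiny two-layer backbone, and all hyperparameters are hypothetical stand-ins (the paper itself compares backbone families such as ResNet, RevNet, and VGG).

```python
import torch
import torch.nn as nn

def rotate_batch(images: torch.Tensor):
    """Make 4 rotated copies (0/90/180/270 degrees) of each image;
    the rotation index serves as a free supervised label."""
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x = torch.cat(rotated, dim=0)                          # (4B, C, H, W)
    y = torch.arange(4).repeat_interleave(images.size(0))  # (4B,)
    return x, y

class RotationNet(nn.Module):
    """Tiny stand-in CNN; the paper's point is that this backbone choice
    matters more than commonly assumed."""
    def __init__(self, width: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(2 * width, 4)  # 4-way rotation classifier

    def forward(self, x):
        return self.head(self.features(x))

# One pretext-task training step on a placeholder unlabeled batch.
model = RotationNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)  # unlabeled images
x, y = rotate_batch(images)
loss = loss_fn(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

After pretraining on such a pretext task, the learned representation would typically be evaluated by freezing the backbone and training a linear classifier on a labeled downstream dataset, which is the kind of benchmark protocol the abstract's performance claims refer to.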