Learning to See by Looking at Noise
Main Authors: Manel Baradad, Jonas Wulff, Tongzhou Wang, Phillip Isola, Antonio Torralba
Format: Article
Language: English
Online Access: Order full text
Summary: Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper we go a step further and ask if we can do away with real image datasets entirely, instead learning from noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. We study two types of noise processes: statistical image models and deep generative models under different random initializations. Our findings show that it is important for the noise to capture certain structural properties of real data, but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property for learning good representations. Datasets, models, and code are available at https://mbaradad.github.io/learning_with_noise.
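
As a rough illustration of the recipe the summary describes (sample images from a simple random process, then learn representations with a contrastive loss), the sketch below pairs a 1/f spectral-noise generator with a toy encoder and a standard InfoNCE objective. The generator, encoder, augmentations, and hyperparameters are illustrative assumptions, not the models or training setup from the paper or its released code.

```python
# Minimal sketch (not the authors' implementation): train a contrastive
# encoder on images sampled from a simple random process.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def spectral_noise_image(size=64, alpha=2.0, rng=None):
    """Sample an RGB image whose power spectrum falls off as 1/f^alpha (illustrative noise model)."""
    rng = rng or np.random.default_rng()
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0                      # avoid division by zero at the DC component
    channels = []
    for _ in range(3):
        phase = rng.uniform(0, 2 * np.pi, (size, size))
        spectrum = (radius ** (-alpha / 2)) * np.exp(1j * phase)
        img = np.fft.ifft2(spectrum).real
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        channels.append(img)
    return np.stack(channels, axis=0).astype(np.float32)  # (3, H, W)

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss between two batches of paired embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy encoder and one training step on a batch of noise images.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

batch = torch.from_numpy(np.stack([spectral_noise_image() for _ in range(8)]))
view1 = batch + 0.05 * torch.randn_like(batch)   # two crude "augmented" views
view2 = batch + 0.05 * torch.randn_like(batch)
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()
optimizer.step()
print(f"contrastive loss: {loss.item():.3f}")
```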
DOI: 10.48550/arxiv.2106.05963