ClusterFit: Improving Generalization of Visual Representations
Format: Article
Language: English
Abstract: Pre-training convolutional neural networks with weakly-supervised and self-supervised strategies is becoming increasingly popular for several computer vision tasks. However, due to the lack of strong discriminative signals, these learned representations may overfit to the pre-training objective (e.g., hashtag prediction) and not generalize well to downstream tasks. In this work, we present a simple strategy, ClusterFit (CF), to improve the robustness of the visual representations learned during pre-training. Given a dataset, we (a) cluster its features extracted from a pre-trained network using k-means and (b) re-train a new network from scratch on this dataset using the cluster assignments as pseudo-labels. We empirically show that clustering helps reduce the pre-training task-specific information in the extracted features, thereby minimizing overfitting to that objective. Our approach is extensible to different pre-training frameworks (weak- and self-supervised), modalities (images and videos), and pre-training tasks (object and action classification). Through extensive transfer learning experiments on 11 different target datasets of varied vocabularies and granularities, we show that ClusterFit significantly improves representation quality compared to state-of-the-art large-scale (millions/billions of examples) weakly-supervised image and video models and self-supervised image models.
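The two-step procedure in the abstract can be sketched in code. The following is a minimal, hypothetical illustration of step (a), pseudo-label generation via k-means over pre-extracted features; it is not the paper's large-scale implementation (which clusters features from millions of examples and then, in step (b), retrains a full network on the pseudo-labels). The function name, the farthest-point initialization, and the toy data are illustrative assumptions.

```python
import numpy as np

def cluster_fit_pseudo_labels(features, k, n_iter=20, seed=0):
    """ClusterFit step (a): run k-means on features extracted from a
    pre-trained network and return one pseudo-label per example.
    Step (b) would then retrain a new network from scratch on
    (input, pseudo-label) pairs. Minimal NumPy sketch, not the
    paper's implementation."""
    rng = np.random.default_rng(seed)
    # farthest-point initialization: first center random, each
    # subsequent center is the point farthest from all chosen centers
    centers = [features[rng.integers(len(features))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers],
                   axis=0)
        centers.append(features[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # assignment step: nearest centroid for every feature vector
        d = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # update step: move each centroid to the mean of its cluster
        for c in range(k):
            mask = labels == c
            if mask.any():
                centers[c] = features[mask].mean(axis=0)
    return labels

# toy usage: two well-separated feature blobs get distinct pseudo-labels,
# which a fresh network would then be trained to predict
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (50, 8)),
                   rng.normal(5.0, 0.1, (50, 8))])
labels = cluster_fit_pseudo_labels(feats, k=2)
```

The intuition from the abstract is that the quantization performed by k-means discards fine-grained, pre-training-task-specific detail (e.g., hashtag idiosyncrasies) while keeping the coarse semantic structure, so a network retrained on the pseudo-labels generalizes better to downstream tasks.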
DOI: 10.48550/arxiv.1912.03330