Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding
Saved in:
Main authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Power-law scaling indicates that large-scale training with uniform sampling is prohibitively slow. Active learning methods aim to increase data efficiency by prioritizing learning on the most relevant examples. Despite their appeal, these methods have yet to be widely adopted, since no single algorithm has been shown to (a) generalize across models and tasks, (b) scale to large datasets, and (c) yield overall FLOP savings when accounting for the overhead of data selection. In this work we propose a method that satisfies these three properties, leveraging small, cheap proxy models to estimate "learnability" scores for data points, which are used to prioritize data for the training of much larger models. As a result, our models require 46% and 51% fewer training updates and up to 25% less total computation to reach the same performance as uniformly trained visual classifiers on JFT and multimodal models on ALIGN, respectively. Finally, we find our data-prioritization scheme to be complementary with recent data-curation and learning objectives, yielding a new state of the art on several multimodal transfer tasks. |
---|---|
DOI: | 10.48550/arxiv.2312.05328 |
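
The abstract describes using small, cheap proxy models to estimate "learnability" scores that prioritize data for a much larger model. The snippet below is a minimal sketch of how such a scheme could look, assuming (as in the prioritized-training literature this line of work builds on) that learnability is the gap between an in-training proxy's loss and a pretrained reference proxy's loss on each example; the function and variable names are illustrative and not taken from the paper.

```python
# Minimal sketch of learnability-based data prioritization (not the authors'
# implementation). Assumption: learnability = online-proxy loss minus
# pretrained-reference loss, so high scores mark examples that are still hard
# for the learner but known to be learnable. All names are illustrative.
import numpy as np

def learnability_scores(proxy_losses, reference_losses):
    """High score: the in-training proxy still finds the example hard, but a
    pretrained reference finds it easy, so it is likely learnable and not yet
    learnt (rather than noisy or already mastered)."""
    return np.asarray(proxy_losses) - np.asarray(reference_losses)

def select_prioritized_subbatch(candidate_ids, proxy_losses, reference_losses, keep):
    """Score a large candidate 'super-batch' with the cheap proxies and keep
    only the top-scoring examples for the expensive large-model update."""
    scores = learnability_scores(proxy_losses, reference_losses)
    top = np.argsort(-scores)[:keep]  # indices sorted by descending score
    return [candidate_ids[i] for i in top]

# Toy usage: from 8 candidates, keep the 4 most learnable for the large model.
ids = list(range(8))
proxy_l = np.array([2.1, 0.3, 1.8, 0.9, 2.5, 0.4, 1.1, 3.0])
ref_l = np.array([0.5, 0.2, 1.7, 0.8, 0.6, 0.3, 1.0, 2.9])
print(select_prioritized_subbatch(ids, proxy_l, ref_l, keep=4))
```

Because the proxies are far smaller than the model being trained, scoring a super-batch costs only a small fraction of one large-model update, which is how the selection overhead can still leave net FLOP savings, consistent with the abstract's claim of up to 25% less total computation.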