Memory Efficient Meta-Learning with Large Images
Saved in:
Main authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Meta-learning approaches to few-shot classification are computationally efficient at test time, requiring just a few optimization steps or a single forward pass to learn a new task, but they remain highly memory-intensive to train. This limitation arises because a task's entire support set, which can contain up to 1000 images, must be processed before an optimization step can be taken. Harnessing the performance gains offered by large images thus requires either parallelizing the meta-learner across multiple GPUs, which may not be available, or trading off task size against image size when memory constraints apply. We improve on both options by proposing LITE, a general and memory-efficient episodic training scheme that enables meta-training on large tasks composed of large images on a single GPU. We achieve this by observing that the gradients for a task can be decomposed into a sum of gradients over the task's training images. This enables us to perform a forward pass on a task's entire training set, but realize significant memory savings by back-propagating through only a random subset of these images, which we show yields an unbiased approximation of the full gradient. We use LITE to train meta-learners and demonstrate new state-of-the-art accuracy on the real-world ORBIT benchmark and on 3 of the 4 parts of the challenging VTAB+MD benchmark relative to leading meta-learners. LITE also enables meta-learners to be competitive with transfer learning approaches at a fraction of the test-time computational cost, thus serving as a counterpoint to the recent narrative that transfer learning is all you need for few-shot classification. |
---|---|
DOI: | 10.48550/arxiv.2107.01105 |
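
The gradient decomposition described in the abstract lends itself to a short illustration. Below is a minimal PyTorch-style sketch of the idea under stated assumptions: `embedder`, `lite_task_loss`, `backprop_size`, and the prototypical-network head are hypothetical names and choices made for illustration, not the authors' released implementation. The whole support set is embedded in the forward pass, but activations are retained for only a small random subset, and that subset's gradient contribution is rescaled so that it is an unbiased estimate of the sum over all support images.

```python
# Minimal LITE-style sketch (assumed PyTorch API; illustrative names only).
import torch
import torch.nn.functional as F

def lite_task_loss(embedder, support_x, support_y, query_x, query_y,
                   num_classes, backprop_size=16):
    """Loss for one task: embed the whole support set, but keep the autograd
    graph (and hence activations) for only `backprop_size` random images."""
    n = support_x.shape[0]
    perm = torch.randperm(n)
    grad_idx, nograd_idx = perm[:backprop_size], perm[backprop_size:]

    # Most support images are embedded without a graph -> the memory saving.
    with torch.no_grad():
        feats_nograd = embedder(support_x[nograd_idx])

    # The random subset keeps its graph; rescale its backward contribution by
    # n / |subset| so it is an unbiased estimate of the full gradient sum.
    # The detach trick leaves the forward values unchanged.
    f = embedder(support_x[grad_idx])
    scale = n / float(backprop_size)
    feats_grad = f.detach() + scale * (f - f.detach())

    feats = torch.cat([feats_grad, feats_nograd], dim=0)
    labels = torch.cat([support_y[grad_idx], support_y[nograd_idx]], dim=0)

    # Simple task-specific head: class prototypes (one possible meta-learner).
    protos = torch.stack([feats[labels == c].mean(dim=0)
                          for c in range(num_classes)])

    # Query loss; gradients reach the embedder only through the query images
    # and the randomly chosen support subset.
    logits = -torch.cdist(embedder(query_x), protos)
    return F.cross_entropy(logits, query_y)
```

The key point is that the class prototypes (and hence the query loss) are computed from the full support set, so the forward pass is exact; only the backward pass is approximated, which is what allows a large task of large images to fit on a single GPU.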