Heavy Labels Out! Dataset Distillation with Label Space Lightening
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Dataset distillation or condensation aims to condense a large-scale training dataset into a much smaller synthetic one such that the training performance of the distilled and original sets on neural networks is similar. Although the number of training samples can be reduced substantially, current state-of-the-art methods heavily rely on enormous soft labels to achieve satisfactory performance. As a result, the required storage can be comparable even to that of the original datasets, especially for large-scale ones. To solve this problem, instead of storing these heavy labels, we propose a novel label-lightening framework termed HeLlO, which aims at effective image-to-label projectors with which synthetic labels can be generated online directly from synthetic images. Specifically, to construct such projectors, we leverage prior knowledge in open-source foundation models, e.g., CLIP, and introduce a LoRA-like fine-tuning strategy to mitigate the gap between the pre-trained and target distributions, so that the original models for soft-label generation can be distilled into a group of low-rank matrices. Moreover, an effective image optimization method is proposed to further mitigate the potential error between the original and distilled label generators. Extensive experiments demonstrate that, with only about 0.003% of the original storage required for a complete set of soft labels, we achieve performance comparable to current state-of-the-art dataset distillation methods on large-scale datasets. Our code will be available. |
DOI: | 10.48550/arxiv.2408.08201 |
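
The abstract's core mechanism, replacing a stored bank of soft labels with a lightweight projector that regenerates them online from synthetic images via low-rank (LoRA-style) adapters on a frozen pre-trained encoder, can be illustrated with a minimal sketch. All names and shapes below (`LoRALinear`, `SoftLabelProjector`, the stand-in encoder, the rank and temperature values) are assumptions for illustration, not the paper's actual HeLlO implementation, which builds on CLIP:

```python
# Minimal sketch: online soft-label generation with a LoRA-style adapter.
# Module names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W*x + (alpha/r) * B @ A @ x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # keep pre-trained weights fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: starts as identity
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * F.linear(F.linear(x, self.A), self.B)

class SoftLabelProjector(nn.Module):
    """Maps images to soft labels; only the low-rank matrices need to be stored."""
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                       # e.g. a frozen CLIP image encoder
        self.head = LoRALinear(nn.Linear(feat_dim, num_classes))

    def forward(self, images, temperature: float = 1.0):
        feats = self.encoder(images)
        return F.softmax(self.head(feats) / temperature, dim=-1)

# Usage: regenerate soft labels online instead of storing them on disk.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))  # stand-in encoder
projector = SoftLabelProjector(encoder, feat_dim=512, num_classes=10)
synthetic_images = torch.randn(4, 3, 32, 32)
soft_labels = projector(synthetic_images)            # shape (4, 10); each row sums to 1
```

The storage saving comes from the zero-initialized low-rank pair (A, B): only these small matrices are kept per distilled dataset, while the large pre-trained encoder is shared and the soft labels themselves are never written to disk.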