Image retrieval outperforms diffusion models on data augmentation
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification. However, diffusion models are themselves trained on large datasets, often with noisy annotations, and it remains an open question to what extent these models contribute to downstream classification performance. In particular, it remains unclear whether they generalize enough to improve over directly using the additional data from their pre-training process for augmentation. We systematically evaluate a range of existing methods for generating images from diffusion models and study new extensions to assess their benefit for data augmentation. Personalizing diffusion models towards the target data outperforms simpler prompting strategies. However, using the pre-training data of the diffusion model alone, via a simple nearest-neighbor retrieval procedure, leads to even stronger downstream performance. Our study explores the potential of diffusion models for generating new training data and, surprisingly, finds that these sophisticated models are not yet able to beat a simple and strong image retrieval baseline on simple downstream vision tasks.
DOI: 10.48550/arxiv.2304.10253
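The abstract credits a simple nearest-neighbor retrieval over the diffusion model's pre-training data as the strongest augmentation baseline. The sketch below illustrates how such a retrieval baseline could look in practice; the embedding model (open_clip ViT-B/32 with LAION weights), the use of local image paths as a stand-in for the pre-training pool, and the choice of k are assumptions for illustration and are not details taken from the paper.

```python
# Minimal sketch of a nearest-neighbor retrieval baseline for data augmentation.
# Assumptions (not from the paper): open_clip ViT-B/32 image embeddings, a list
# of candidate image paths standing in for the diffusion model's pre-training
# pool, and cosine similarity with k retrieved images per target image.
import numpy as np
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()


def embed_images(paths):
    """Encode images into L2-normalized embeddings, one row per image."""
    feats = []
    with torch.no_grad():
        for p in paths:
            img = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            f = model.encode_image(img)
            feats.append(torch.nn.functional.normalize(f, dim=-1))
    return torch.cat(feats).numpy()


def retrieve_augmentations(target_paths, pool_paths, k=50):
    """Return, for each target image, the k most similar pool images."""
    target = embed_images(target_paths)   # (n_target, d)
    pool = embed_images(pool_paths)       # (n_pool, d)
    sims = target @ pool.T                # cosine similarity (rows are unit norm)
    top_k = np.argsort(-sims, axis=1)[:, :k]
    return [[pool_paths[j] for j in row] for row in top_k]


# Usage: the retrieved images would simply be appended to the downstream
# classifier's training set, labeled with the class of the query image, e.g.
# extra = retrieve_augmentations(class_train_images, retrieval_pool_paths, k=50)
```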