Diversify, Don't Fine-Tune: Scaling Up Visual Recognition Training with Synthetic Images
Format: Article
Language: English
Abstract: Recent advances in generative deep learning have enabled the creation of
high-quality synthetic images in text-to-image generation. Prior work shows
that fine-tuning a pretrained diffusion model on ImageNet and generating
synthetic training images from the finetuned model can enhance an ImageNet
classifier's performance. However, performance degrades as synthetic images
outnumber real ones. In this paper, we explore whether generative fine-tuning
is essential for this improvement and whether it is possible to further scale
up training using more synthetic data. We present a new framework leveraging
off-the-shelf generative models to generate synthetic training images,
addressing multiple challenges: class name ambiguity, lack of diversity in
naive prompts, and domain shifts. Specifically, we leverage large language
models (LLMs) and CLIP to resolve class name ambiguity. To diversify images, we
propose contextualized diversification (CD) and stylized diversification (SD)
methods, also prompted by LLMs. Finally, to mitigate domain shifts, we leverage
domain adaptation techniques with auxiliary batch normalization for synthetic
images. Our framework consistently enhances recognition model performance with
more synthetic data, up to 6x the size of the original ImageNet dataset, showcasing the
potential of synthetic data for improved recognition models and strong
out-of-domain generalization.
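The abstract names two prompt-side components: resolving ambiguous class names with LLMs and CLIP, and diversifying prompts through contextualized (CD) and stylized (SD) variants before generating with an off-the-shelf text-to-image model. The sketch below is a minimal, assumed illustration of that stage, not the authors' implementation; the candidate phrasings, context and style lists, helper names, and model checkpoints are hypothetical stand-ins for the LLM-generated prompts described in the paper.

```python
# Sketch of the prompt-side steps described in the abstract:
# (1) pick an unambiguous phrasing for a class name with CLIP,
# (2) diversify prompts with contexts (CD) and styles (SD),
# (3) generate synthetic images with an off-the-shelf diffusion model.
# All phrasings, lists, and checkpoints below are assumptions; in the
# paper the phrasings and diversifications come from an LLM.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from diffusers import StableDiffusionPipeline

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pick_phrase(candidates, reference_images):
    """Choose the candidate phrasing whose average CLIP score against a few
    real reference images of the class is highest (resolves e.g. 'crane')."""
    inputs = clip_proc(text=candidates, images=reference_images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = clip(**inputs).logits_per_image.mean(dim=0)
    return candidates[int(scores.argmax())]

# Hypothetical LLM outputs: contexts for CD, styles for SD.
contexts = ["standing in a wetland at dawn", "flying over a rice field"]
styles = ["in the style of a wildlife documentary photo", "as a film photograph"]

refs = [Image.open(p).convert("RGB") for p in ["crane_0.jpg", "crane_1.jpg"]]
phrase = pick_phrase(["a crane, the wading bird",
                      "a crane, the construction machine"], refs)
prompts = [f"a photo of {phrase}, {c}, {s}" for c in contexts for s in styles]

# Off-the-shelf generator, no fine-tuning on ImageNet.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
images = [pipe(p).images[0] for p in prompts]
```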
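For the domain-shift component, the abstract points to domain adaptation with auxiliary batch normalization for synthetic images. Below is a minimal PyTorch sketch of that general idea, keeping separate BN statistics per domain; the module and argument names are assumptions and the paper's exact design may differ.

```python
# Sketch of auxiliary batch normalization: maintain separate BN statistics
# for real and synthetic batches so the synthetic domain does not distort
# the running estimates used for real images. Names are assumptions.
import torch
import torch.nn as nn

class AuxBatchNorm2d(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.bn_real = nn.BatchNorm2d(num_features)  # real-image statistics
        self.bn_syn = nn.BatchNorm2d(num_features)   # synthetic-image statistics

    def forward(self, x, synthetic=False):
        # Route the batch through the branch matching its domain.
        return self.bn_syn(x) if synthetic else self.bn_real(x)

# Toy usage: one conv layer followed by the domain-routed BN.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
aux_bn = AuxBatchNorm2d(16)

real_batch = torch.randn(8, 3, 224, 224)
syn_batch = torch.randn(8, 3, 224, 224)
out_real = aux_bn(conv(real_batch), synthetic=False)
out_syn = aux_bn(conv(syn_batch), synthetic=True)
```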
DOI: 10.48550/arxiv.2312.02253