Adapt Anything: Tailor Any Image Classifiers across Domains and Categories Using Text-to-Image Diffusion Models
Saved in:
Main authors: , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We do not pursue a novel method in this paper; rather, we study
whether a modern text-to-image diffusion model can tailor any task-adaptive
image classifier across domains and categories. Existing domain-adaptive image
classification methods exploit both source and target data for domain
alignment, transferring the knowledge learned from labeled source data to
unlabeled target data. However, given the development of text-to-image
diffusion models, we ask whether high-fidelity synthetic data from a
text-to-image generator can serve as a surrogate for real-world source data.
If so, we no longer need to collect and annotate source data for each domain
adaptation task in a one-for-one manner. Instead, we use a single
off-the-shelf text-to-image model to synthesize images whose category labels
are derived from the corresponding text prompts, and then leverage this
surrogate data as a bridge to transfer the knowledge embedded in the
task-agnostic text-to-image generator to the task-oriented image classifier
via domain adaptation. Such a one-for-all adaptation paradigm allows us to
adapt anything in the world using only one text-to-image generator together
with the corresponding unlabeled target data. Extensive experiments validate
the feasibility of the proposed idea, which even surpasses state-of-the-art
domain adaptation methods that use source data collected and annotated in the
real world.
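
For a concrete picture of the pipeline the abstract describes, here is a
minimal sketch of the surrogate-source step: synthesizing labeled images from
class-name prompts with an off-the-shelf text-to-image model. The checkpoint,
prompt template, class list, and per-class sample budget below are
illustrative assumptions, not the paper's exact settings; any standard
unsupervised domain adaptation method could then align this synthetic source
set with the unlabeled target data.

```python
# Sketch: build a labeled surrogate "source" dataset from text prompts alone,
# so no real source images need to be collected or annotated.
# Assumptions (not from the paper): Stable Diffusion v1.5 via Hugging Face
# `diffusers`, a generic "a photo of a {class}" prompt, and 8 images per class.
import os
import torch
from diffusers import StableDiffusionPipeline

CLASSES = ["backpack", "bicycle", "calculator"]  # hypothetical category list
IMAGES_PER_CLASS = 8                             # illustrative sample budget
OUT_DIR = "surrogate_source"                     # synthetic source dataset root

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for name in CLASSES:
    class_dir = os.path.join(OUT_DIR, name)
    os.makedirs(class_dir, exist_ok=True)
    prompt = f"a photo of a {name}"  # the category label comes from the prompt
    for i in range(IMAGES_PER_CLASS):
        image = pipe(prompt, num_inference_steps=30).images[0]
        image.save(os.path.join(class_dir, f"{i:04d}.png"))

# The resulting class-named folder tree can stand in for a human-annotated
# source set in any standard UDA method that aligns source and target features,
# which is what makes the paradigm "one-for-all": one generator serves every
# adaptation task.
```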
DOI: 10.48550/arxiv.2310.16573