Source-Free Domain Adaptation with Diffusion-Guided Source Data Generation
Format: Article
Language: English
Abstract: This paper introduces a novel approach to leverage the generalizability of Diffusion Models for Source-Free Domain Adaptation (DM-SFDA). Our proposed DM-SFDA method involves fine-tuning a pre-trained text-to-image diffusion model to generate source domain images, using features from the target images to guide the diffusion process. Specifically, the pre-trained diffusion model is fine-tuned to generate source samples that minimize entropy and maximize confidence for the pre-trained source model. We then use a diffusion model-based image mixup strategy to bridge the domain gap between the source and target domains. We validate our approach through comprehensive experiments across a range of datasets, including Office-31, Office-Home, and VisDA. The results demonstrate significant improvements in SFDA performance, highlighting the potential of diffusion models in generating contextually relevant, domain-specific images.
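
As a rough illustration of the generation objective described in the abstract, the sketch below shows one way an entropy-minimization and confidence-maximization loss could be computed from the source classifier's predictions on diffusion-generated images. The names `source_model` and `generated_images` are hypothetical, and the paper's exact loss formulation and weighting are not given in this record; this is a minimal sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def entropy_confidence_loss(logits: torch.Tensor) -> torch.Tensor:
    """Low when the source classifier is confident on generated images.

    Combines mean prediction entropy (to be minimized) with mean maximum
    softmax probability (to be maximized, hence subtracted). The actual
    weighting used in the paper may differ.
    """
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
    confidence = probs.max(dim=-1).values.mean()
    return entropy - confidence

# Hypothetical usage: `source_model` is the frozen pre-trained source
# classifier and `generated_images` are source-style samples produced by the
# fine-tuned text-to-image diffusion model. The gradient of this loss would
# flow into the diffusion model's trainable parameters, not the classifier.
# logits = source_model(generated_images)
# loss = entropy_confidence_loss(logits)
# loss.backward()
```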
DOI: 10.48550/arxiv.2402.04929