Domain Adaptation with a Single Vision-Language Embedding
| Main authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Summary: | Domain adaptation has been extensively investigated in computer vision but still requires access to target data at training time, which might be difficult to obtain in some uncommon conditions. In this paper, we present a new framework for domain adaptation relying on a single Vision-Language (VL) latent embedding instead of full target data. First, leveraging a contrastive language-image pre-training model (CLIP), we propose prompt/photo-driven instance normalization (PIN). PIN is a feature augmentation method that mines multiple visual styles using a single target VL latent embedding, by optimizing affine transformations of low-level source features. The VL embedding can come from a language prompt describing the target domain, a partially optimized language prompt, or a single unlabeled target image. Second, we show that these mined styles (i.e., augmentations) can be used for zero-shot (i.e., target-free) and one-shot unsupervised domain adaptation. Experiments on semantic segmentation demonstrate the effectiveness of the proposed method, which outperforms relevant baselines in the zero-shot and one-shot settings. |
| DOI: | 10.48550/arxiv.2410.21361 |
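
The core mechanism summarized above, mining a visual style by optimizing affine (shift/scale) transformations of low-level features so that the encoded image aligns with a single target CLIP embedding, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses an `open_clip` RN50 backbone, treats the output of `visual.layer1` as the "low-level" features, takes the prompt `"driving at night"` as a hypothetical target-domain description, and picks the step count and learning rate arbitrarily.

```python
import torch
import torch.nn.functional as F
import open_clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumption: a ResNet CLIP backbone (RN50), whose layer1 output serves as
# the "low-level" feature map to stylize.
model, _, preprocess = open_clip.create_model_and_transforms("RN50", pretrained="openai")
model = model.to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)
tokenizer = open_clip.get_tokenizer("RN50")

# Target VL embedding from a language prompt describing the target domain.
with torch.no_grad():
    tokens = tokenizer(["driving at night"]).to(device)
    target = F.normalize(model.encode_text(tokens), dim=-1)

def mine_style(src_image, steps=100, lr=1.0):
    """Optimize per-channel affine parameters (mu, sigma), applied
    AdaIN-style to the low-level features of one source image, so that
    the image embedding moves toward the target text embedding."""
    x = preprocess(src_image).unsqueeze(0).to(device)

    # Initialize (mu, sigma) from the source feature statistics.
    grabbed = {}
    h = model.visual.layer1.register_forward_hook(
        lambda mod, inp, out: grabbed.update(f=out))
    with torch.no_grad():
        model.encode_image(x)
    h.remove()
    f = grabbed["f"]
    mu = f.mean(dim=(2, 3)).clone().requires_grad_(True)    # shape (1, C)
    sigma = f.std(dim=(2, 3)).clone().requires_grad_(True)  # shape (1, C)

    def stylize(mod, inp, out):
        # AdaIN: replace the feature map's channel statistics with (mu, sigma).
        m = out.mean(dim=(2, 3), keepdim=True)
        s = out.std(dim=(2, 3), keepdim=True) + 1e-5
        return sigma[..., None, None] * (out - m) / s + mu[..., None, None]

    opt = torch.optim.SGD([mu, sigma], lr=lr)
    hook = model.visual.layer1.register_forward_hook(stylize)
    for _ in range(steps):
        emb = F.normalize(model.encode_image(x), dim=-1)
        loss = 1.0 - (emb * target).sum()  # cosine distance to the target
        opt.zero_grad()
        loss.backward()
        opt.step()
    hook.remove()
    return mu.detach(), sigma.detach()  # one mined style (augmentation)

# Example: mine a "night" style from a single daytime source image.
# mu, sigma = mine_style(Image.open("source_day.jpg"))
```

A single mined `(mu, sigma)` pair defines one style; repeating the optimization from different source images or initializations would yield the multiple augmentations the summary refers to, and swapping the text prompt for the embedding of one unlabeled target image would correspond to the one-shot setting.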