Black Box Few-Shot Adaptation for Vision-Language models
Saved in:

Main authors: , , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: Vision-Language (V-L) models trained with contrastive learning to align
the visual and language modalities have been shown to be strong few-shot
learners. Soft prompt learning is the method of choice for few-shot downstream
adaptation, aiming to bridge the modality gap caused by the distribution shift
induced by the new domain. While parameter-efficient, prompt learning still
requires access to the model weights and can be computationally infeasible for
large models with billions of parameters. To address these shortcomings, in
this work, we describe a black-box method for V-L few-shot adaptation that (a)
operates on pre-computed image and text features and hence works without access
to the model's weights, (b) is orders of magnitude faster at training time, (c)
is amenable to both supervised and unsupervised training, and (d) can even be
used to align image and text features computed from uni-modal models. To
achieve this, we propose Linear Feature Alignment (LFA), a simple linear
approach for V-L re-alignment in the target domain. LFA is initialized from a
closed-form solution to a least-squares problem and then iteratively updated by
minimizing a re-ranking loss. Despite its simplicity, our approach can even
surpass soft-prompt learning methods, as shown by extensive experiments on 11
image and 2 video datasets.
DOI: 10.48550/arxiv.2304.01752
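
The abstract describes LFA only at a high level: a linear map between pre-computed features, initialized from a closed-form least-squares solution and then refined with a re-ranking loss. The sketch below is one minimal reading of that two-stage recipe, not the authors' implementation: the closed-form initializer is written as an orthogonal-Procrustes least-squares fit, and the re-ranking loss as a hinge over the hardest competing class; the exact loss form, optimizer, margin, and step count are all assumptions.

```python
# Minimal sketch of the two-stage LFA recipe described in the abstract.
# Assumed (not stated in the record): features are L2-normalized, the
# closed-form init is the orthogonal-Procrustes least-squares solution,
# and the re-ranking loss is a hardest-negative hinge.

import torch
import torch.nn.functional as F

def lfa_init(img_feats, txt_feats):
    """Closed-form least-squares initialization.

    Solves min_W ||img_feats @ W - txt_feats||_F over orthogonal W
    via SVD (the Procrustes solution; a plain pseudo-inverse fit
    would be another valid least-squares choice).
    img_feats: (N, d) pre-computed image features.
    txt_feats: (N, d) text embedding of each sample's class prompt.
    """
    u, _, vt = torch.linalg.svd(img_feats.T @ txt_feats)
    return u @ vt  # (d, d) orthogonal map

def rerank_loss(img_feats, class_txt, labels, W, margin=0.1):
    """Hinge-style re-ranking loss: the true class embedding should
    score higher than the hardest competing class by `margin`."""
    sims = F.normalize(img_feats @ W, dim=-1) @ class_txt.T  # (N, C)
    pos = sims.gather(1, labels[:, None]).squeeze(1)
    # Mask out the true class, then take the hardest remaining class.
    hardest_neg = sims.scatter(1, labels[:, None], float('-inf')).max(dim=1).values
    return torch.clamp(margin - pos + hardest_neg, min=0).mean()

def lfa(img_feats, class_txt, labels, steps=100, lr=1e-3):
    """img_feats: (N, d), class_txt: (C, d) class text embeddings,
    labels: (N,) long tensor of class indices."""
    # Stage 1: closed-form init from the least-squares problem.
    W = lfa_init(img_feats, class_txt[labels]).detach().clone().requires_grad_(True)
    opt = torch.optim.AdamW([W], lr=lr)
    # Stage 2: iterative refinement by minimizing the re-ranking loss.
    for _ in range(steps):
        opt.zero_grad()
        rerank_loss(img_feats, class_txt, labels, W).backward()
        opt.step()
    return W.detach()
```

Under the same assumptions, inference would classify a feature x as the argmax of `F.normalize(x @ W, dim=-1) @ class_txt.T`; the unsupervised variant the abstract mentions would presumably replace `labels` with pseudo-labels, a detail the record does not spell out. Because everything operates on cached features and a single (d, d) matrix, no access to the V-L model's weights is needed, which is the black-box property the abstract claims.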