Instance-Wise Adaptive Tuning and Caching for Vision-Language Models
Main authors: , , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: Large-scale vision-language models (LVLMs) pretrained on massive image-text pairs have achieved remarkable success in learning visual representations. However, existing paradigms for transferring LVLMs to downstream tasks face two primary challenges. First, the text features remain fixed after being computed and cannot be adjusted according to image features, which limits the model's adaptability. Second, the model's output depends solely on the similarity between the text and image features, leading to excessive reliance on the LVLM. To address these two challenges, we introduce a novel two-branch model named Instance-Wise Adaptive Tuning and Caching (ATC). Specifically, one branch implements our proposed ConditionNet, which guides image features to form an adaptive textual cache that adjusts based on image features, achieving instance-wise inference and improving the model's adaptability. The other branch introduces similarities between images and incorporates a learnable visual cache, designed to decouple new and previous knowledge, allowing the model to acquire new knowledge while preserving prior knowledge. The model's output is jointly determined by the two branches, thus overcoming the limitations of existing methods that rely solely on LVLMs. Additionally, our method requires only limited computing resources to tune parameters, yet outperforms existing methods on 11 benchmark datasets.
DOI: 10.48550/arxiv.2307.15983
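The abstract describes a two-branch design: one branch adapts cached text features per image instance via ConditionNet, the other scores images against a learnable visual cache and fuses both branches. The following is a minimal PyTorch sketch of how such a two-branch model might fit together. The module internals (the MLP inside `ConditionNetSketch`, the additive adaptation, the cache sizes, and the fusion weight `alpha`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionNetSketch(nn.Module):
    """Hypothetical ConditionNet: maps an image feature to a per-instance
    adjustment of the cached text (class) features."""

    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, image_feat, text_cache):
        # image_feat: (batch, dim); text_cache: (num_classes, dim)
        delta = self.mlp(image_feat)                       # (batch, dim)
        # Broadcast so each image gets its own adapted text cache.
        adapted = text_cache.unsqueeze(0) + delta.unsqueeze(1)
        return F.normalize(adapted, dim=-1)                # (batch, C, dim)


class TwoBranchSketch(nn.Module):
    """Sketch of a two-branch cache model in the spirit of the abstract."""

    def __init__(self, dim, num_classes, num_cache_keys):
        super().__init__()
        self.condition_net = ConditionNetSketch(dim)
        # Learnable visual cache: keys live in image-feature space,
        # values are per-key class logits (both assumptions).
        self.cache_keys = nn.Parameter(torch.randn(num_cache_keys, dim))
        self.cache_values = nn.Parameter(torch.zeros(num_cache_keys, num_classes))
        self.alpha = 1.0  # branch-fusion weight (assumption)

    def forward(self, image_feat, text_cache):
        image_feat = F.normalize(image_feat, dim=-1)
        # Branch 1: instance-wise adaptive textual cache.
        adapted_text = self.condition_net(image_feat, text_cache)
        text_logits = torch.einsum("bd,bcd->bc", image_feat, adapted_text)
        # Branch 2: image-image similarity against the learnable visual cache.
        sim = image_feat @ F.normalize(self.cache_keys, dim=-1).t()
        cache_logits = sim.softmax(dim=-1) @ self.cache_values
        # Final prediction combines both branches.
        return text_logits + self.alpha * cache_logits


# Usage with random features standing in for frozen LVLM (e.g. CLIP) outputs.
model = TwoBranchSketch(dim=512, num_classes=10, num_cache_keys=160)
images = torch.randn(4, 512)
texts = F.normalize(torch.randn(10, 512), dim=-1)
logits = model(images, texts)
print(logits.shape)  # (4, 10): one score per class for each image
```

In this sketch only `ConditionNetSketch` and the cache parameters would be trained, while the LVLM features are precomputed, which is consistent with the abstract's claim of tuning with limited computing resources.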