Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Vision-language models are pre-trained by aligning image-text pairs in a
common space to handle open-set visual concepts. To boost the transferability of
the pre-trained models, recent works adopt fixed or learnable prompts, i.e.,
classification weights synthesized from natural-language descriptions of
task-relevant categories, to reduce the gap between the training and test tasks.
However, how prompts improve inference performance, and which prompts do so,
remains unclear. In this paper, we explicitly clarify the importance of including
semantic information in prompts, whereas existing prompting methods generate
prompts without exploiting the semantic information carried by textual labels.
Manually constructing prompts with rich semantics requires domain expertise and is
extremely time-consuming. To cope with this issue, we propose a semantic-aware
prompt learning method, CPKP (Confounder-Pruning Knowledge Prompt), which
retrieves an ontological knowledge graph by treating the textual label as a query,
extracting task-relevant semantic information. CPKP further introduces a
double-tier confounder-pruning procedure to refine the derived semantic
information: graph-tier confounders are gradually identified and phased out,
following a criterion inspired by Granger causality, while feature-tier
confounders are removed following the maximum-entropy principle from information
theory. Empirically, evaluations demonstrate the effectiveness of CPKP (e.g., with
two shots, CPKP outperforms the manual-prompt method by 4.64% and the
learnable-prompt method by 1.09% on average) as well as its superiority in domain
generalization over benchmark approaches. Our implementation is available at
https://github.com/Mowenyii/CPKP.
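
To make the prompt mechanism summarized above concrete, here is a minimal sketch of prompt-based zero-shot classification, where classification weights are synthesized from natural-language descriptions of the candidate categories. The `encode_text` and `encode_image` functions are hypothetical stand-ins for a pre-trained vision-language model's encoders (e.g., CLIP), so the sketch runs without any model download; a real system would call the actual encoders there.

```python
# Sketch: synthesizing a classifier from prompts, assuming stand-in encoders.
import torch
import torch.nn.functional as F

EMBED_DIM = 512

def encode_text(prompts: list[str]) -> torch.Tensor:
    # Stand-in for a VLM text encoder: a deterministic (per-process) random
    # embedding per prompt, so repeated calls on the same prompt agree.
    embeddings = []
    for p in prompts:
        g = torch.Generator().manual_seed(hash(p) % (2**31))
        embeddings.append(torch.randn(EMBED_DIM, generator=g))
    return torch.stack(embeddings)

def encode_image(image: torch.Tensor) -> torch.Tensor:
    # Stand-in for a VLM image encoder.
    return torch.randn(EMBED_DIM)

class_names = ["golden retriever", "tabby cat", "red panda"]
# A fixed manual prompt template; learnable-prompt methods replace the
# template tokens with trained vectors instead.
prompts = [f"a photo of a {name}." for name in class_names]

# Synthesize the classifier from language alone: one weight vector per class.
weights = F.normalize(encode_text(prompts), dim=-1)   # shape (C, D)

image = torch.zeros(3, 224, 224)                      # dummy input image
feat = F.normalize(encode_image(image), dim=-1)       # shape (D,)

logits = 100.0 * feat @ weights.T                     # scaled cosine similarities
probs = logits.softmax(dim=-1)
print({name: round(p.item(), 3) for name, p in zip(class_names, probs)})
```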
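
The graph-tier pruning step is described only at a high level in the abstract; the authors' actual procedure is in the repository linked above. Purely as an illustration of the Granger-causality-flavored idea (a retrieved node is treated as a confounder if removing it does not degrade prediction), here is a hypothetical leave-one-out screen. `score_with_nodes` is an assumed helper that returns validation accuracy when only the given subset of knowledge-graph nodes is used.

```python
# Sketch: leave-one-out confounder screening, NOT CPKP's actual algorithm.
from typing import Callable, Sequence

def prune_confounders(
    nodes: Sequence[str],
    score_with_nodes: Callable[[Sequence[str]], float],
    tolerance: float = 1e-3,
) -> list[str]:
    """Keep only nodes whose removal degrades the score by more than `tolerance`."""
    kept = list(nodes)
    full_score = score_with_nodes(kept)
    for node in list(kept):
        candidate = [n for n in kept if n != node]
        # If the score barely changes without `node`, it carries no predictive
        # information for the task: prune it as a confounder.
        if full_score - score_with_nodes(candidate) <= tolerance:
            kept = candidate
            full_score = score_with_nodes(kept)
    return kept

# Toy usage: "mammal" and "pet" are informative nodes, "sold in stores" is not.
informative = {"mammal", "pet"}
toy_score = lambda ns: sum(0.3 for n in ns if n in informative)
print(prune_confounders(["mammal", "pet", "sold in stores"], toy_score))
# -> ['mammal', 'pet']
```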
DOI: 10.48550/arxiv.2205.11100