ATPrompt: Textual Prompt Learning with Embedded Attributes
| Main author(s): | |
| --- | --- |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract:
Textual-based prompt learning methods primarily employ multiple learnable soft prompts and hard class tokens in a cascading manner as text prompt inputs, aiming to align image and text (category) spaces for downstream tasks. However, current training is restricted to aligning images with predefined known categories and cannot be associated with unknown categories. In this work, we propose utilizing universal attributes as a bridge to enhance the alignment between images and unknown categories. Specifically, we introduce an Attribute-embedded Textual Prompt learning method for vision-language models, named ATPrompt. This approach expands the learning space of soft prompts from the original one-dimensional category level to the multi-dimensional attribute level by incorporating multiple universal attribute tokens into the learnable soft prompts. Through this modification, we transform the text prompt from a category-centric form to an attribute-category hybrid form. To finalize the attributes for downstream tasks, we propose a differentiable attribute search method that learns to identify representative and suitable attributes from a candidate pool summarized by a large language model. As an easy-to-use plug-in technique, ATPrompt can seamlessly replace the existing prompt format of textual-based methods, offering general improvements at a negligible computational cost. Extensive experiments on 11 datasets demonstrate the effectiveness of our method.
DOI: 10.48550/arxiv.2412.09442
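
The abstract describes an attribute-category hybrid prompt in which universal attribute tokens are interleaved with learnable soft prompts ahead of the hard class token. Below is a minimal sketch of how such a prompt sequence could be assembled for a CLIP-style text encoder in PyTorch; the class name `AttributeEmbeddedPrompt`, the token counts, the embedding dimension, and the example attributes are illustrative assumptions, not the paper's released implementation.

```python
# Illustrative sketch only; not the authors' code. Assumes attribute and class
# names have already been tokenized and embedded by the text encoder's
# embedding layer.
import torch
import torch.nn as nn

class AttributeEmbeddedPrompt(nn.Module):
    """Builds an attribute-category hybrid prompt: groups of learnable soft
    tokens interleaved with fixed attribute-word embeddings, followed by a
    final soft-token group and the hard class-token embedding."""

    def __init__(self, embed_dim=512, n_soft=4, n_attributes=2):
        super().__init__()
        # One learnable soft-token group per attribute, plus one group that
        # precedes the class token.
        self.soft = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(n_soft, embed_dim))
             for _ in range(n_attributes + 1)]
        )

    def forward(self, attr_embeds, class_embed):
        # attr_embeds: list of (n_attr_tokens, embed_dim) tensors, one per
        #              universal attribute (e.g. embeddings of "color", "shape")
        # class_embed: (n_class_tokens, embed_dim) embedding of the class name
        parts = []
        for soft, attr in zip(self.soft[:-1], attr_embeds):
            parts += [soft, attr]              # [soft][attribute] blocks
        parts += [self.soft[-1], class_embed]  # final [soft][class] block
        return torch.cat(parts, dim=0)         # token sequence for the text encoder

# Example: two attributes of 3 tokens each and a 2-token class name yield a
# (4+3) + (4+3) + (4+2) = 20-token prompt sequence.
prompt = AttributeEmbeddedPrompt()
sequence = prompt([torch.randn(3, 512), torch.randn(3, 512)], torch.randn(2, 512))
assert sequence.shape == (20, 512)
```

In this sketch only the soft-token groups are trainable; the attribute and class embeddings stay fixed and simply reshape the context in which the soft prompts are optimized, which is consistent with the abstract's claim of negligible additional computational cost.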