Cross-coupled prompt learning for few-shot image recognition
Published in: Displays 2024-12, Vol. 85, p. 102862, Article 102862
Main Authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Full text
Abstract: Prompt learning based on large models shows great potential for reducing training time and resource costs, and it has been progressively applied to visual tasks such as image recognition. Nevertheless, existing prompt learning schemes suffer from either inadequate prompt information from a single modality or insufficient prompt interaction between multiple modalities, resulting in low efficiency and performance. To address these limitations, we propose a Cross-Coupled Prompt Learning (CCPL) architecture, designed with two novel components (i.e., a Cross-Coupled Prompt Generator (CCPG) module and a Cross-Modal Fusion (CMF) module) to achieve efficient interaction between visual and textual prompts. Specifically, the CCPG module incorporates a cross-attention mechanism to automatically generate visual and textual prompts, each of which is adaptively updated by the self-attention mechanism in its respective image or text encoder. Furthermore, the CMF module performs deep fusion at the output layer to reinforce cross-modal feature interaction, trained with the Image–Text Matching (ITM) loss function. We conduct extensive experiments on eight image datasets. The experimental results verify that our proposed CCPL outperforms state-of-the-art (SOTA) methods on few-shot image recognition tasks. The source code of this project is released at: https://github.com/elegantTechie/CCPL.
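The abstract describes bidirectional cross-attention between visual and textual prompts. The following is a minimal PyTorch sketch of that coupling idea, not the authors' released code (see the GitHub link above); the class name, prompt counts, dimensions, and the use of nn.MultiheadAttention are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CrossCoupledPromptGenerator(nn.Module):
    """Sketch: visual and textual prompts that attend to each other,
    so each modality's prompts carry information from the other."""

    def __init__(self, dim: int = 512, n_prompts: int = 4, n_heads: int = 8):
        super().__init__()
        # Learnable base prompts per modality (hypothetical initialization).
        self.visual_prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.text_prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        # Cross-attention in both directions: each modality's prompts
        # query the other modality's prompts.
        self.v_from_t = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.t_from_v = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, batch_size: int):
        v = self.visual_prompts.unsqueeze(0).expand(batch_size, -1, -1)
        t = self.text_prompts.unsqueeze(0).expand(batch_size, -1, -1)
        # Bidirectional coupling via cross-attention.
        v_out, _ = self.v_from_t(query=v, key=t, value=t)
        t_out, _ = self.t_from_v(query=t, key=v, value=v)
        # Residual connections keep the learnable bases in the loop; the
        # coupled prompts would then be prepended to the image and text
        # encoder token sequences, where self-attention updates them.
        return v + v_out, t + t_out
```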
• To the best of our knowledge, CCPG is the first to achieve cross-modal bidirectional interaction between visual and textual prompts.
• CCPG also reinforces cross-modal feature fusion between image and text embeddings, enabling stronger mutual exchange of informative representations.
• To achieve cross-modal interaction, we design the CCPG module, which captures key information in visual and textual prompts via a cross-attention mechanism.
• To reinforce cross-modal feature fusion, we design a CMF module that enhances semantic consistency between image and text via the ITM loss function (see the sketch below).
• Extensive experiments show that CCPL surpasses single- and multi-modal prompt learning methods on various few-shot image recognition tasks.
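For the ITM loss named in the highlights, here is a hedged sketch of the standard image-text matching objective: a binary classifier over fused image-text features, with in-batch shuffled texts as mismatched negatives. The paper's actual CMF head may differ; the concatenation-based fusion and the negative-sampling scheme here are common choices, not details confirmed by the record.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ITMHead(nn.Module):
    """Classifies a fused image-text pair as matched (1) or mismatched (0)."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Simple fusion by concatenation; a deeper fusion block is also plausible.
        self.classifier = nn.Linear(2 * dim, 2)

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor):
        return self.classifier(torch.cat([img_feat, txt_feat], dim=-1))

def itm_loss(head: ITMHead, img_feat: torch.Tensor, txt_feat: torch.Tensor):
    """Positives are aligned rows; negatives pair each image with a shuffled
    text embedding from the same batch (a simplification: a random permutation
    can occasionally leave a pair aligned)."""
    b = img_feat.size(0)
    perm = torch.randperm(b, device=img_feat.device)
    logits = torch.cat([head(img_feat, txt_feat),        # matched pairs
                        head(img_feat, txt_feat[perm])], # mismatched pairs
                       dim=0)
    labels = torch.cat([torch.ones(b, dtype=torch.long, device=img_feat.device),
                        torch.zeros(b, dtype=torch.long, device=img_feat.device)])
    return F.cross_entropy(logits, labels)
```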
ISSN: 0141-9382
DOI: 10.1016/j.displa.2024.102862