EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
Saved in:
Main Authors: | , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Segment Anything Model (SAM) has attracted widespread attention for its
superior interactive segmentation capabilities with visual prompts, while text
prompts remain largely unexplored. In this paper, we empirically investigate
which text prompt encoders (e.g., CLIP or LLM) are suitable for adapting SAM to
referring expression segmentation and introduce the Early Vision-language
Fusion-based SAM (EVF-SAM). EVF-SAM is a simple yet effective referring
segmentation method that exploits multimodal prompts (i.e., image and text) and
comprises a pre-trained vision-language model to generate referring prompts and
a SAM model for segmentation. Surprisingly, we observe that (1) multimodal
prompts and (2) vision-language models with early fusion (e.g., BEIT-3) are
beneficial for prompting SAM for accurate referring segmentation. Our
experiments show that the proposed EVF-SAM based on BEIT-3 obtains
state-of-the-art performance on RefCOCO/+/g for referring expression
segmentation and demonstrates the superiority of prompting SAM with early
vision-language fusion. In addition, the proposed EVF-SAM with 1.32B parameters
achieves remarkably higher performance while using nearly 82% fewer parameters
than previous SAM methods based on large multimodal models. |
DOI: | 10.48550/arxiv.2406.20076 |
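
The abstract above describes EVF-SAM as an early-fusion vision-language encoder (e.g., BEIT-3) that produces a referring prompt, which is then passed to SAM to predict the referred mask. Below is a minimal, hypothetical PyTorch sketch of that pipeline, based only on the abstract: all module names, dimensions, the toy vocabulary, and the stub encoders are assumptions for illustration and stand in for the real pre-trained BEIT-3 and SAM components.

```python
# Minimal sketch of the EVF-SAM pipeline described in the abstract.
# All names and sizes are illustrative assumptions, not the official implementation.
import torch
import torch.nn as nn


class EarlyFusionEncoder(nn.Module):
    """Stand-in for an early-fusion vision-language model (e.g., BEIT-3).

    Image patches and text tokens are concatenated and processed jointly by a
    shared transformer, so fusion happens from the first layer onward rather
    than after two separate encoders.
    """

    def __init__(self, dim: int = 256, num_layers: int = 2):
        super().__init__()
        self.patch_embed = nn.Linear(16 * 16 * 3, dim)   # flattened 16x16 RGB patches
        self.text_embed = nn.Embedding(1000, dim)        # toy vocabulary (assumption)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, patches: torch.Tensor, text_ids: torch.Tensor) -> torch.Tensor:
        tokens = torch.cat([self.patch_embed(patches), self.text_embed(text_ids)], dim=1)
        fused = self.blocks(tokens)          # joint image-text representation
        # Take one fused token as the referring summary (an assumption for this sketch).
        return fused[:, patches.shape[1]]


class EVFSAMSketch(nn.Module):
    """Project the fused multimodal feature into a prompt embedding for SAM."""

    def __init__(self, fusion_dim: int = 256, sam_prompt_dim: int = 256):
        super().__init__()
        self.encoder = EarlyFusionEncoder(fusion_dim)
        self.prompt_proj = nn.Linear(fusion_dim, sam_prompt_dim)

    def forward(self, patches: torch.Tensor, text_ids: torch.Tensor) -> torch.Tensor:
        referring_feat = self.encoder(patches, text_ids)
        sparse_prompt = self.prompt_proj(referring_feat).unsqueeze(1)  # (B, 1, C)
        # In the full model, this prompt would be fed to SAM's mask decoder together
        # with the SAM image embedding to predict the referred mask.
        return sparse_prompt


if __name__ == "__main__":
    model = EVFSAMSketch()
    dummy_patches = torch.randn(1, 196, 16 * 16 * 3)   # 14x14 grid of 16x16 RGB patches
    dummy_text = torch.randint(0, 1000, (1, 8))        # tokenized referring expression
    print(model(dummy_patches, dummy_text).shape)      # torch.Size([1, 1, 256])
```

The point the sketch illustrates is the early fusion the paper argues for: image patches and text tokens are mixed in a single transformer before any prompt is produced, instead of being encoded separately and fused late.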