The Solution for the 5th GCAIAC Zero-shot Referring Expression Comprehension Challenge
Format: Article
Language: English
Abstract: This report presents a solution for the zero-shot referring expression comprehension task. Vision-language multimodal foundation models (such as CLIP and SAM) have gained significant attention in recent years as a cornerstone of mainstream research. One of their key applications lies in generalizing to zero-shot downstream tasks. Unlike traditional referring expression comprehension, the zero-shot variant applies pre-trained vision-language models directly to the task, without task-specific training. Recent studies have improved the zero-shot performance of foundation models on referring expression comprehension by introducing visual prompts. To address the zero-shot referring expression comprehension challenge, we introduced a combination of visual prompts, accounted for the influence of textual prompts, and employed joint prediction tailored to the data characteristics. Ultimately, our approach achieved accuracy scores of 84.825 on leaderboard A and 71.460 on leaderboard B, securing first place.
DOI: 10.48550/arxiv.2407.04998
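
The abstract describes the method only at a high level: a CLIP-style model applied zero-shot, with visual prompts marking candidate regions, textual prompts wrapping the expression, and a joint prediction over both. The sketch below is one plausible minimal realization of that recipe, not the authors' released code; the red-ellipse visual prompt, the two prompt templates, the `score_boxes` helper, and the (x1, y1, x2, y2) box format are all assumptions made for illustration.

```python
# Minimal sketch of CLIP-based zero-shot referring expression comprehension
# (hypothetical; not the paper's implementation). Each candidate box is
# marked with a visual prompt (a red ellipse, as in prior visual-prompting
# work), paired with simple textual prompt templates, and scored by CLIP;
# the box with the highest averaged similarity is returned.
import torch
import clip  # openai/CLIP package
from PIL import Image, ImageDraw

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def score_boxes(image: Image.Image, expression: str, boxes):
    """Return the candidate box that best matches the referring expression."""
    # Textual prompts: a small template ensemble around the raw expression.
    templates = [expression, f"a photo of {expression}"]
    text = clip.tokenize(templates).to(device)

    prompted = []
    for (x1, y1, x2, y2) in boxes:
        img = image.copy()
        # Visual prompt: draw a red ellipse around the candidate region.
        ImageDraw.Draw(img).ellipse((x1, y1, x2, y2),
                                    outline=(255, 0, 0), width=4)
        prompted.append(preprocess(img))
    batch = torch.stack(prompted).to(device)

    with torch.no_grad():
        img_feat = model.encode_image(batch)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        # Joint prediction (assumed): average similarity over the templates.
        sims = (img_feat @ txt_feat.T).mean(dim=-1)

    return boxes[sims.argmax().item()]
```

Averaging similarities over a small template ensemble is one simple way to combine visual and textual prompts into a single joint score; the paper's actual combination strategy is not specified in the abstract and may differ.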