Object-Aware Query Perturbation for Cross-Modal Image-Text Retrieval
Format: Article
Language: English
Online access: Order full text
Abstract: Pre-trained vision-and-language (V&L) models have substantially improved the performance of cross-modal image-text retrieval. In general, however, V&L models have limited retrieval performance for small objects because of the coarse alignment between words and small objects in an image. Human cognition, in contrast, is known to be object-centric: we pay more attention to important objects, even when they are small. To bridge this gap between human cognition and the capability of V&L models, we propose a cross-modal image-text retrieval framework based on "object-aware query perturbation." The proposed method generates a key feature subspace of the detected objects and perturbs the corresponding queries using this subspace to improve object awareness in the image. With the proposed method, object-aware cross-modal image-text retrieval is possible while keeping the rich expressive power and retrieval performance of existing V&L models, without additional fine-tuning. Comprehensive experiments on four public datasets show that our method outperforms conventional algorithms. Our code is publicly available at https://github.com/NEC-N-SOGI/query-perturbation.
DOI: 10.48550/arxiv.2407.12346
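The abstract only outlines the mechanism, so the following is a minimal, hypothetical sketch of how a subspace-based query perturbation of this kind could look, assuming PyTorch-style features from a region detector. The function names, the subspace rank, and the scaling factor `alpha` are illustrative assumptions, not taken from the paper or its repository.

```python
# Hypothetical illustration of subspace-based query perturbation (not the
# paper's actual implementation). Shapes and names are assumptions.
import torch


def object_subspace(obj_feats: torch.Tensor, rank: int) -> torch.Tensor:
    """Orthonormal basis (d, rank) spanning the dominant directions of
    detected-object features obj_feats of shape (num_objects, d)."""
    # Right-singular vectors of the object features give the principal
    # directions of the objects in the shared embedding space.
    _, _, vh = torch.linalg.svd(obj_feats, full_matrices=False)
    return vh[:rank].T  # columns are orthonormal


def perturb_queries(queries: torch.Tensor, basis: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Amplify the object-related component of attention queries.

    queries: (num_queries, d); basis: (d, rank) with orthonormal columns.
    """
    proj = queries @ basis @ basis.T  # projection onto the object subspace
    return queries + alpha * proj     # training-free additive perturbation


# Toy usage: random tensors stand in for real V&L detector/query features.
d, num_objects, num_queries = 64, 5, 10
obj_feats = torch.randn(num_objects, d)
queries = torch.randn(num_queries, d)
basis = object_subspace(obj_feats, rank=3)
perturbed = perturb_queries(queries, basis)
print(perturbed.shape)  # torch.Size([10, 64])
```

A training-free additive projection along these lines keeps each original query intact while amplifying its component inside the object-feature subspace, which is consistent with the abstract's claim of preserving the expressive power of the existing V&L model without fine-tuning.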