Video-Text Retrieval by Supervised Sparse Multi-Grained Learning
Main authors: ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: While recent progress in video-text retrieval has been driven by the exploration of better representation learning, in this paper we present a novel multi-grained sparse learning framework, S3MA, which learns an aligned sparse space shared between video and text for video-text retrieval. The shared sparse space is initialized with a finite number of sparse concepts, each of which refers to a set of words. Given the text data at hand, we learn and update the shared sparse space in a supervised manner using the proposed similarity and alignment losses. Moreover, to enable multi-grained alignment, we incorporate frame representations to better model the video modality and to calculate fine-grained and coarse-grained similarities. Benefiting from the learned shared sparse space and the multi-grained similarities, S3MA outperforms existing methods in extensive experiments on several video-text retrieval benchmarks. Our code is available at https://github.com/yimuwangcs/Better_Cross_Modal_Retrieval.
DOI: 10.48550/arxiv.2302.09473
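To make the abstract's two ideas concrete, here is a minimal sketch of a shared sparse concept space combined with fine- and coarse-grained similarity scoring. All names, sizes, activation choices, and the pooling scheme are illustrative assumptions, not the actual S3MA implementation; the paper's code is at the repository linked above.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: a shared space of K learnable sparse concepts
# onto which both video (frame) and text (word) features are projected.
class SparseConceptSpace(torch.nn.Module):
    def __init__(self, num_concepts: int = 512, dim: int = 256):
        super().__init__()
        # One row per sparse concept; per the abstract, S3MA initializes
        # these from words and updates them under supervision.
        self.concepts = torch.nn.Parameter(torch.randn(num_concepts, dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Project features onto the concepts; ReLU keeps activations
        # non-negative and sparse (an assumed sparsification choice).
        return F.relu(feats @ F.normalize(self.concepts, dim=-1).T)


def multi_grained_similarity(frame_feats: torch.Tensor,
                             word_feats: torch.Tensor,
                             space: SparseConceptSpace) -> torch.Tensor:
    """frame_feats: (num_frames, dim); word_feats: (num_words, dim)."""
    v = F.normalize(space(frame_feats), dim=-1)  # (num_frames, K)
    t = F.normalize(space(word_feats), dim=-1)   # (num_words, K)
    # Fine-grained: best-matching frame for each word, averaged over words.
    fine = (v @ t.T).max(dim=0).values.mean()
    # Coarse-grained: pooled video vs. pooled sentence representation.
    coarse = F.cosine_similarity(v.mean(0), t.mean(0), dim=0)
    return fine + coarse


# Usage with random stand-in features:
space = SparseConceptSpace()
sim = multi_grained_similarity(torch.randn(8, 256), torch.randn(12, 256), space)
```

In training, scores like these would feed the similarity and alignment losses the abstract mentions; their exact formulations are in the paper.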