Video-Language Alignment via Spatio-Temporal Graph Transformer
| Main authors: | , , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary:

Video-language alignment is a crucial multi-modal task that benefits various downstream applications, e.g., video-text retrieval and video question answering. Existing methods either utilize multi-modal information in video-text pairs or apply global and local alignment techniques to promote alignment precision. However, these methods often fail to fully explore the spatio-temporal relationships among vision tokens within a video and across different video-text pairs. In this paper, we propose a novel Spatio-Temporal Graph Transformer module (dubbed STGT) to uniformly learn spatial and temporal contexts for video-language alignment pre-training. Specifically, STGT combines spatio-temporal graph structure information with attention in the transformer block, effectively exploiting spatio-temporal contexts. In this way, we can model the relationships between vision tokens, improving video-text alignment precision and thereby benefiting downstream tasks. In addition, we propose a self-similarity alignment loss that exploits the inherent self-similarity within the video and the text. Starting from the initial optimization achieved by contrastive learning, it further improves the alignment accuracy between video and text. Experimental results on challenging downstream tasks, including video-text retrieval and video question answering, verify the superior performance of our method.
DOI: 10.48550/arxiv.2407.11677
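
The record above only summarizes the paper, so the following is a minimal, hypothetical sketch of the two ideas named in the abstract, not the authors' implementation. It assumes that "graph structure information" enters attention as an additive bias built from a spatio-temporal adjacency over vision tokens, and that the self-similarity alignment loss matches the batch-level video-video and text-text similarity matrices on top of a standard contrastive objective; the names `GraphBiasedAttention` and `self_similarity_alignment_loss` are invented for illustration.

```python
# Illustrative sketch only; simplifying assumptions noted above and in comments.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphBiasedAttention(nn.Module):
    """Multi-head self-attention whose logits are biased by a graph adjacency
    (one plausible way to inject spatio-temporal graph structure)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable scale for how strongly the graph bias influences attention.
        self.graph_weight = nn.Parameter(torch.tensor(1.0))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (B, N, D) vision tokens for one video clip
        # adj: (B, N, N) spatio-temporal adjacency (e.g., 1 for spatial/temporal
        #      neighbours, 0 otherwise), used here as an additive attention bias
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)              # each: (B, H, N, head_dim)
        logits = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        logits = logits + self.graph_weight * adj.unsqueeze(1)  # broadcast over heads
        attn = logits.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)


def self_similarity_alignment_loss(v_emb: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
    """Encourage the video-video and text-text similarity matrices to agree
    (one possible reading of the self-similarity alignment loss)."""
    v = F.normalize(v_emb, dim=-1)
    t = F.normalize(t_emb, dim=-1)
    sim_v = v @ v.t()                                     # (B, B) video self-similarity
    sim_t = t @ t.t()                                     # (B, B) text self-similarity
    return F.mse_loss(sim_v, sim_t)


def contrastive_loss(v_emb: torch.Tensor, t_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Standard symmetric InfoNCE over a batch of paired video/text embeddings."""
    v = F.normalize(v_emb, dim=-1)
    t = F.normalize(t_emb, dim=-1)
    logits = v @ t.t() / temperature
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    B, N, D = 4, 16, 256
    tokens = torch.randn(B, N, D)
    adj = (torch.rand(B, N, N) > 0.7).float()             # toy spatio-temporal graph
    block = GraphBiasedAttention(D)
    fused = block(tokens, adj)                            # graph-aware vision tokens
    v_emb, t_emb = fused.mean(dim=1), torch.randn(B, D)   # pooled video / dummy text embeddings
    loss = contrastive_loss(v_emb, t_emb) + self_similarity_alignment_loss(v_emb, t_emb)
    print(float(loss))
```

In this reading, the contrastive term pulls paired video and text embeddings together, while the self-similarity term additionally asks the two modalities to agree on which samples in the batch resemble each other, which is one way the loss could refine alignment beyond the initial contrastive optimization.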