Modality-Balanced Embedding for Video Retrieval
SIGIR, 2022
Format: Article
Language: English
Abstract: Video search has become the main way for users to discover videos relevant to a text query on large short-video sharing platforms. While training a query-video bi-encoder model on online search logs, we identify a modality bias phenomenon: the video encoder relies almost entirely on text matching, neglecting the videos' other modalities such as vision and audio. This modality imbalance results from (a) the modality gap: the relevance between a query and a video's text is much easier to learn because the query is also a piece of text, sharing a modality with the video text; and (b) data bias: most training samples can be solved by text matching alone. Here we share our practices for improving the first retrieval stage, including our solution to the modality imbalance issue. We propose MBVR (short for Modality-Balanced Video Retrieval) with two key components: manually generated modality-shuffled (MS) samples and a dynamic margin (DM) based on visual relevance. Together they encourage the video encoder to pay balanced attention to each modality. Through extensive experiments on a real-world dataset, we show empirically that our method is both effective and efficient at solving the modality bias problem. We have also deployed MBVR on a large video platform and observed statistically significant gains over a highly optimized baseline in an A/B test and manual GSB evaluations.
DOI: 10.48550/arxiv.2204.08182
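
To make the abstract's two components concrete, below is a minimal PyTorch sketch of how modality-shuffled (MS) negatives and a visual-relevance-driven dynamic margin (DM) could enter a bi-encoder training loss. Everything here is an assumption for illustration, not the authors' implementation: the fusion of text and vision embeddings (sum followed by normalization), the roll-based in-batch shuffle, the linear margin schedule, and all names (`fuse`, `mbvr_loss`, `visual_relevance`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def fuse(text_emb, vision_emb):
    # Hypothetical fusion of a video's text and vision embeddings into a
    # single video representation (the paper's actual fusion may differ).
    return F.normalize(text_emb + vision_emb, dim=-1)

def mbvr_loss(query_emb, video_text_emb, video_vision_emb,
              visual_relevance, base_margin=0.2, margin_scale=0.2):
    """Hinge loss with modality-shuffled (MS) negatives and a dynamic
    margin (DM) that grows with visual relevance. Shapes: (B, D) for
    embeddings, (B,) for visual_relevance in [0, 1] (assumed to be
    precomputed from some relevance signal)."""
    B = query_emb.size(0)

    # Positive: each query against its own fused (text + vision) video.
    pos_sim = (query_emb * fuse(video_text_emb, video_vision_emb)).sum(-1)

    # MS negative: keep each video's text embedding but pair it with the
    # visual embedding of another in-batch video. An encoder that relies
    # only on text matching scores this as high as the positive, so the
    # loss penalizes text-only behavior.
    perm = torch.roll(torch.arange(B), shifts=1)
    ms_sim = (query_emb * fuse(video_text_emb, video_vision_emb[perm])).sum(-1)

    # DM: queries that depend more on visual content must separate the
    # positive from the MS negative by a wider margin.
    margin = base_margin + margin_scale * visual_relevance

    return F.relu(margin - (pos_sim - ms_sim)).mean()

# Toy usage with random features:
B, D = 8, 128
query = F.normalize(torch.randn(B, D), dim=-1)
loss = mbvr_loss(query, torch.randn(B, D), torch.randn(B, D), torch.rand(B))
```

In actual training, these MS and DM terms would sit alongside the standard in-batch contrastive loss over logged query-video pairs; the sketch shows only the two MBVR-specific ideas.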