MuLTI: Efficient Video-and-Language Understanding with Text-Guided MultiWay-Sampler and Multiple Choice Modeling
Format: Article
Language: English
Abstract: Video-and-language understanding has a variety of industrial applications, such as video question answering, text-video retrieval, and multi-label classification. Existing video-and-language understanding methods generally adopt heavy multi-modal encoders and feature fusion modules, which incur high computational costs. In particular, they have difficulty dealing with the dense video frames or long text prevalent in industrial applications. This paper proposes MuLTI, a highly accurate and efficient video-and-language understanding model that achieves efficient and effective feature fusion and rapid adaptation to downstream tasks. Specifically, we design a Text-Guided MultiWay-Sampler based on adapt-pooling residual mapping and self-attention modules to sample long sequences and fuse multi-modal features, which reduces the computational cost and addresses the performance degradation caused by previous samplers. MuLTI can therefore handle longer sequences at limited computational cost. To further enhance the model's performance and address the lack of pretraining tasks for video question answering, we propose a new pretraining task named Multiple Choice Modeling. This task bridges the gap between pretraining and downstream tasks and improves the model's ability to align video and text features. Benefiting from the efficient feature fusion module and the new pretraining task, MuLTI achieves state-of-the-art performance on multiple datasets. Implementation and pretrained models will be released.
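The abstract describes a sampler in which text guides the downsampling of long video sequences, combined with a pooling-based residual path. As a rough illustration only (the paper's actual architecture is not given here; the class name, layer choices, and head count below are assumptions), a text-guided sampler of this general kind can be sketched in PyTorch: short text tokens act as queries over long video tokens, so the fused output is only as long as the text, while an adaptive-pooling residual retains coarse video information.

```python
import torch
import torch.nn as nn

class TextGuidedSampler(nn.Module):
    """Hypothetical sketch of a text-guided sampler: text tokens query a long
    video sequence via attention; an adaptive-pooling residual is added back.
    Not the paper's implementation, just the general idea."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # Text queries (short) attend over video keys/values (long),
        # so the output sequence has only len(text) tokens.
        fused, _ = self.attn(query=text, key=video, value=video)
        # Adaptive-pooling residual: pool the video sequence down to the
        # text length and add it, preserving coarse video information.
        pooled = nn.functional.adaptive_avg_pool1d(
            video.transpose(1, 2), text.size(1)).transpose(1, 2)
        return self.norm(fused + pooled)

sampler = TextGuidedSampler(dim=64)
text = torch.randn(2, 16, 64)    # short text sequence
video = torch.randn(2, 512, 64)  # dense video frames
out = sampler(text, video)
print(out.shape)  # torch.Size([2, 16, 64])
```

The key efficiency property this sketch shares with the abstract's claim is that the fused sequence length is tied to the short text, not the long video, so downstream layers never see the full frame sequence.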
DOI: 10.48550/arxiv.2303.05707