Learning Video Context as Interleaved Multimodal Sequences
Saved in:
Main authors: | , , , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Narrative videos, such as movies, pose significant challenges for video understanding due to their rich contexts (characters, dialogues, storylines) and diverse demands (identifying who is involved, how characters relate, and why events unfold). In this paper, we introduce MovieSeq, a multimodal language model developed to address the wide range of challenges in understanding video contexts. Our core idea is to represent videos as interleaved multimodal sequences (including images, plots, videos, and subtitles), either by linking external knowledge databases or by using offline models (such as Whisper for subtitles). Through instruction tuning, this approach empowers the language model to interact with videos using interleaved multimodal instructions. For example, instead of relying solely on the video as input, we jointly provide character photos alongside their names and dialogues, allowing the model to associate these elements and generate more comprehensive responses. To demonstrate its effectiveness, we validate MovieSeq's performance on six datasets (LVU, MAD, MovieNet, CMD, TVC, MovieQA) across five settings (video classification, audio description, video-text retrieval, video captioning, and video question answering). The code will be publicly available at https://github.com/showlab/MovieSeq. |
DOI: | 10.48550/arxiv.2407.21757 |
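
The summary describes representing a video as an interleaved sequence of images, plot text, subtitles, and the clip itself, which an instruction-tuned language model then consumes. The sketch below is only an illustration of that data layout, not the authors' implementation: every class, function, and data value here (e.g. `build_interleaved_sequence`, `ImageItem`, the Casablanca-style placeholder inputs) is a hypothetical stand-in and does not come from the MovieSeq paper or repository.

```python
# Minimal sketch (not the authors' code): assembling an interleaved multimodal
# instruction for a narrative-video question, in the spirit of the abstract.
# All names below are hypothetical placeholders.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class ImageItem:      # e.g., a character photo linked from an external database
    path: str
    caption: str      # e.g., the character's name

@dataclass
class TextItem:       # e.g., plot synopsis or a subtitle line (e.g., from Whisper)
    text: str

@dataclass
class VideoItem:      # the movie clip itself
    path: str

Item = Union[ImageItem, TextItem, VideoItem]

def build_interleaved_sequence(clip_path: str,
                               character_photos: List[ImageItem],
                               subtitles: List[str],
                               plot: str,
                               question: str) -> List[Item]:
    """Interleave character photos, plot, subtitles, the clip, and the query."""
    seq: List[Item] = []
    for photo in character_photos:
        seq.append(photo)                                   # character photo
        seq.append(TextItem(f"This is {photo.caption}."))   # tie the name to it
    seq.append(TextItem(f"Plot so far: {plot}"))
    for line in subtitles:
        seq.append(TextItem(f"Subtitle: {line}"))
    seq.append(VideoItem(clip_path))
    seq.append(TextItem(question))
    return seq

# Example usage with placeholder data.
sequence = build_interleaved_sequence(
    clip_path="clip_0042.mp4",
    character_photos=[ImageItem("rick.jpg", "Rick"), ImageItem("ilsa.jpg", "Ilsa")],
    subtitles=["We'll always have Paris."],
    plot="Rick runs a cafe in Casablanca during the war.",
    question="Why does Rick help Ilsa at the end of this scene?",
)
for item in sequence:
    print(type(item).__name__, item)
```

In such a setup, each image or video item would be mapped to visual tokens and spliced between the text tokens before being fed to the instruction-tuned language model, so the model can ground names and dialogue in the corresponding faces and footage; the exact tokenization and tuning recipe is described in the paper itself.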