Movie101v2: Improved Movie Narration Benchmark
Saved in:
Main Authors: | , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Automatic movie narration aims to generate video-aligned plot descriptions to
assist visually impaired audiences. Unlike standard video captioning, it
involves not only describing key visual details but also inferring plots that
unfold across multiple movie shots, presenting distinct and complex challenges.
To advance this field, we introduce Movie101v2, a large-scale, bilingual
dataset with enhanced data quality specifically designed for movie narration.
Revisiting the task, we propose breaking down the ultimate goal of automatic
movie narration into three progressive stages, offering a clear roadmap with
corresponding evaluation metrics. Based on our new benchmark, we baseline a
range of large vision-language models, including GPT-4V, and conduct an
in-depth analysis of the challenges in narration generation. Our findings
highlight that achieving applicable movie narration generation is a fascinating
goal that requires significant research. |
DOI: | 10.48550/arxiv.2404.13370 |