CVPR 2023 Text Guided Video Editing Competition
Format: Article
Language: English
Online access: Order full text
Abstract: Humans watch more than a billion hours of video per day. Most of this video was edited manually, which is a tedious process. However, AI-enabled video generation and video editing are on the rise. Building on text-to-image models like Stable Diffusion and Imagen, generative AI has improved dramatically on video tasks. But it is hard to evaluate progress on these video tasks because there is no standard benchmark. We therefore propose a new dataset for text-guided video editing (TGVE), and we run a competition at CVPR to evaluate models on our TGVE dataset. In this paper we present a retrospective on the competition and describe the winning method. The competition dataset is available at https://sites.google.com/view/loveucvpr23/track4.
DOI: 10.48550/arxiv.2310.16003