VIMI: Grounding Video Generation through Multi-modal Instruction
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt video datasets, resulting in a lack of visual grounding and restricting their versatility and application in multimodal integration. To address this, we construct a large-scale multimodal prompt dataset by employing retrieval methods to pair in-context examples with the given text prompts, and we then use a two-stage training strategy to enable diverse video generation tasks within the same model. In the first stage, we propose a multimodal conditional video generation framework for pretraining on these augmented datasets, establishing a foundational model for grounded video generation. In the second stage, we finetune the model from the first stage on three video generation tasks, incorporating multimodal instructions. This process further refines the model's ability to handle diverse inputs and tasks, ensuring seamless integration of multimodal information. After this two-stage training process, VIMI demonstrates multimodal understanding capabilities, producing contextually rich and personalized videos grounded in the provided inputs, as shown in Figure 1. Compared to previous visually grounded video generation methods, VIMI can synthesize consistent and temporally coherent videos with large motion while retaining semantic control. Finally, VIMI also achieves state-of-the-art text-to-video generation results on the UCF101 benchmark.
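The abstract does not specify which retrieval method pairs in-context examples with text prompts. As a rough illustration only, a CLIP-based nearest-neighbor pairing could look like the minimal sketch below; the model choice, the `build_multimodal_prompt` helper, and all parameters are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch (assumed, not the paper's method): pair a text prompt with
# its top-k most similar candidate images via CLIP text-image similarity.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def build_multimodal_prompt(text_prompt, candidate_images, k=3):
    """Return the text prompt paired with the k most similar images.

    candidate_images: list of PIL images; k must be <= len(candidate_images).
    """
    inputs = processor(text=[text_prompt], images=candidate_images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_text has shape (1, num_images): similarity of the one
    # prompt against every candidate image.
    scores = out.logits_per_text[0]
    top = scores.topk(k).indices.tolist()
    return {"text": text_prompt,
            "in_context": [candidate_images[i] for i in top]}
```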
DOI: 10.48550/arxiv.2407.06304