Highly Detailed and Temporal Consistent Video Stylization via Synchronized Multi-Frame Diffusion
Main Authors: |  |
Format: | Article |
Language: | English |
Subjects: |  |
Online Access: | Order full text |
Abstract: | Text-guided video-to-video stylization transforms the visual appearance of a
source video into a different appearance guided by textual prompts. Existing
text-guided image diffusion models can be extended for stylized video
synthesis. However, they struggle to generate videos with both a highly detailed
appearance and temporal consistency. In this paper, we propose a synchronized
multi-frame diffusion framework to maintain both visual detail and temporal
consistency. Frames are denoised in a synchronous fashion, and, more
importantly, information from different frames is shared from the beginning of
the denoising process. Such information sharing ensures that a consensus among
frames, in terms of overall structure and color distribution, can be reached
early in the denoising process, before it is too late. The optical flow from the
original video serves as the connection among frames, and hence the venue for
information sharing. We demonstrate the effectiveness of our method in
generating high-quality and diverse results in extensive experiments. Our
method shows superior qualitative and quantitative results compared to
state-of-the-art video editing methods. |
DOI: | 10.48550/arxiv.2311.14343 |
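
The abstract describes the core mechanism only at a high level. Below is a minimal Python sketch of the general idea of synchronized multi-frame denoising with flow-based information sharing. This is not the authors' implementation: `denoise_step`, `warp_with_flow`, and the blending weight `share_weight` are hypothetical placeholders standing in for a text-conditioned diffusion step, an optical-flow warping operator, and whatever fusion scheme the paper actually uses.

```python
import numpy as np

def denoise_step(latent, t, prompt):
    # Placeholder for one text-conditioned diffusion denoising step.
    # A real model would predict and remove noise conditioned on `prompt`.
    return latent * 0.99

def warp_with_flow(latent, flow):
    # Placeholder for resampling a latent along an optical-flow field
    # estimated from the source video.
    return latent

def synchronized_multiframe_diffusion(latents, flows, prompt,
                                      num_steps=50, share_weight=0.5):
    """Denoise all frames in lockstep, sharing information between
    neighbouring frames from the very first denoising step."""
    latents = [l.copy() for l in latents]
    for t in reversed(range(num_steps)):
        # 1) one synchronous denoising step for every frame
        latents = [denoise_step(l, t, prompt) for l in latents]
        # 2) blend each frame with its predecessor warped along the
        #    source video's optical flow, so frames can reach a consensus
        #    on overall structure and color early in the process
        blended = [latents[0]]
        for i in range(1, len(latents)):
            warped_prev = warp_with_flow(latents[i - 1], flows[i - 1])
            blended.append((1 - share_weight) * latents[i]
                           + share_weight * warped_prev)
        latents = blended
    return latents

# Toy usage: four 8x8 "latents" and dummy flow fields between neighbours
frames = [np.random.randn(8, 8) for _ in range(4)]
flows = [np.zeros((8, 8, 2)) for _ in range(3)]
stylized = synchronized_multiframe_diffusion(frames, flows,
                                             prompt="watercolor painting")
print(len(stylized), stylized[0].shape)
```

The sketch only illustrates why sharing begins at the first step: if frames were denoised independently and merged at the end, their structures would already have diverged, whereas blending warped neighbours at every step keeps them aligned throughout.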