DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos
Format: | Article |
Language: | English |
Online Access: | Order full text |
Abstract: | Estimating video depth in open-world scenarios is challenging due to the
diversity of videos in appearance, content motion, camera movement, and length.
We present DepthCrafter, an innovative method for generating temporally
consistent long depth sequences with intricate details for open-world videos,
without requiring any supplementary information such as camera poses or optical
flow. The generalization ability to open-world videos is achieved by training
the video-to-depth model from a pre-trained image-to-video diffusion model,
through our meticulously designed three-stage training strategy. Our training
approach enables the model to generate depth sequences of variable length, up
to 110 frames, in one pass, and to harvest both precise depth details and rich
content diversity from realistic and synthetic datasets. We also propose an
inference strategy that can process extremely long videos through segment-wise
estimation and seamless stitching. Comprehensive evaluations on multiple
datasets reveal that DepthCrafter achieves state-of-the-art performance in
open-world video depth estimation under zero-shot settings. Furthermore,
DepthCrafter facilitates various downstream applications, including depth-based
visual effects and conditional video generation. |
DOI: | 10.48550/arxiv.2409.02095 |
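
The abstract's inference strategy, segment-wise estimation followed by seamless stitching, can be illustrated with a short sketch. This is a minimal illustration rather than the authors' released code: `estimate_depth` is a hypothetical stand-in for the diffusion-based video-to-depth model, the `overlap` value and the linear cross-fade over overlapping frames are assumed details of the stitching scheme, and only the 110-frame segment capacity comes from the abstract.

```python
import numpy as np

def stitch_depth_segments(video, estimate_depth, seg_len=110, overlap=25):
    """Estimate depth for an arbitrarily long video by running a
    fixed-capacity video-to-depth model on overlapping segments and
    cross-fading the overlapping frames.

    video:          array of shape (T, H, W, 3)
    estimate_depth: hypothetical callable mapping a clip (t, H, W, 3)
                    to a depth sequence (t, H, W)
    seg_len:        maximum frames the model handles at once (110 per the abstract)
    overlap:        assumed number of frames shared by consecutive segments
    """
    n = len(video)
    if n <= seg_len:
        return estimate_depth(video)

    stride = seg_len - overlap
    depth = np.zeros(video.shape[:3], dtype=np.float32)
    weight = np.zeros(n, dtype=np.float32)
    ramp = np.linspace(0.0, 1.0, overlap, dtype=np.float32)

    for start in range(0, n - overlap, stride):
        end = min(start + seg_len, n)
        seg = estimate_depth(video[start:end]).astype(np.float32)

        # Linear cross-fade: fade in over the leading overlap (except for
        # the first segment) and fade out over the trailing overlap (except
        # for the last), so paired ramps sum to 1 across every seam.
        w = np.ones(end - start, dtype=np.float32)
        if start > 0:
            w[:overlap] = ramp
        if end < n:
            w[-overlap:] = ramp[::-1]

        depth[start:end] += seg * w[:, None, None]
        weight[start:end] += w

    return depth / weight[:, None, None]
```

The paper's actual stitching may well differ, for instance by conditioning each segment's denoising on the overlap with the previous one; this sketch only conveys the segment-and-blend idea behind processing videos longer than the model's per-pass capacity.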