DiVE: DiT-based Video Generation with Enhanced Control
| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | eng |
| Subjects | |
| Online access | Order full text |
Summary:

Generating high-fidelity, temporally consistent videos in autonomous driving scenarios poses a significant challenge, e.g. depicting problematic maneuvers in corner cases. Although recent video generation works built on Diffusion Transformers (DiT) have been proposed to tackle this problem, their potential for multi-view video generation scenarios remains unexplored. Notably, we propose the first DiT-based framework specifically designed for generating temporally and multi-view consistent videos that precisely match the given bird's-eye view layout control. Specifically, the proposed framework leverages a parameter-free spatial view-inflated attention mechanism to guarantee cross-view consistency, and integrates joint cross-attention modules and a ControlNet-Transformer to further improve the precision of control. To demonstrate these advantages, we conduct extensive qualitative comparisons on the nuScenes dataset, particularly on some of the most challenging corner cases. In summary, the proposed method proves effective in producing long, controllable, and highly consistent videos under difficult conditions.
DOI: 10.48550/arxiv.2409.01595
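
The mechanism highlighted in the summary, parameter-free spatial view-inflated attention, is commonly realized by flattening the spatial token sequences of all camera views at one timestep into a single sequence, so that an existing self-attention layer attends across views without introducing new weights. The sketch below illustrates this reading of the abstract; it is not the authors' released code, and the tensor layout `(batch, views, frames, tokens, channels)` and the name `view_inflated_attention` are assumptions for illustration.

```python
import torch

def view_inflated_attention(attn, x):
    """Parameter-free spatial view-inflated attention (sketch).

    Tokens from all camera views at the same timestep are concatenated
    along the spatial axis, so the existing self-attention module `attn`
    attends across views -- no new parameters are introduced.

    attn: any callable mapping (batch, seq, channels) -> same shape,
          e.g. a pretrained DiT self-attention block.
    x:    tensor of shape (B, V, T, N, C) = batch, views, frames,
          spatial tokens, channels (assumed layout).
    """
    B, V, T, N, C = x.shape
    # Merge the view axis into the token axis: each frame now carries
    # the spatial tokens of all V views as one sequence.
    x = x.permute(0, 2, 1, 3, 4).reshape(B * T, V * N, C)
    x = attn(x)  # cross-view attention with the original weights
    # Restore the (B, V, T, N, C) layout.
    return x.reshape(B, T, V, N, C).permute(0, 2, 1, 3, 4)

# Usage with a stand-in attention layer (hypothetical sizes):
attn_layer = torch.nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)
attn = lambda t: attn_layer(t, t, t, need_weights=False)[0]
x = torch.randn(2, 6, 4, 16, 64)  # 2 clips, 6 camera views, 4 frames, 16 tokens
out = view_inflated_attention(attn, x)
print(out.shape)  # torch.Size([2, 6, 4, 16, 64])
```

Because only reshapes are added around the existing attention call, the pretrained weights are reused unchanged, which is what makes the mechanism parameter-free.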