DualAD: Disentangling the Dynamic and Static World for End-to-End Driving
Main authors: | , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | State-of-the-art approaches for autonomous driving integrate multiple
sub-tasks of the overall driving task into a single pipeline that can be
trained in an end-to-end fashion by passing latent representations between the
different modules. In contrast to previous approaches that rely on a unified
grid to represent the belief state of the scene, we propose dedicated
representations to disentangle dynamic agents and static scene elements. This
allows us to explicitly compensate for the effect of both ego and object motion
between consecutive time steps and to flexibly propagate the belief state
through time. Furthermore, dynamic objects can not only attend to the input
camera images, but also directly benefit from the inferred static scene
structure via a novel dynamic-static cross-attention. Extensive experiments on
the challenging nuScenes benchmark demonstrate the benefits of the proposed
dual-stream design, especially for modelling highly dynamic agents in the
scene, and highlight the improved temporal consistency of our approach. Our
method titled DualAD not only outperforms independently trained single-task
networks, but also improves over previous state-of-the-art end-to-end models by
a large margin on all tasks along the functional chain of driving. |
---|---|
DOI: | 10.48550/arxiv.2406.06264 |
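
The dynamic-static cross-attention mentioned in the abstract can be pictured as object queries from the dynamic stream attending to the inferred static scene features. Below is a minimal, hypothetical PyTorch sketch of such a layer; the class name, tensor shapes, and the assumption that static structure lives on a BEV grid are all illustrative and not taken from the paper's actual implementation.

```python
# Illustrative sketch only: names and shapes are assumptions, not the
# authors' implementation. Dynamic object queries attend to static
# scene (e.g. BEV map) features via standard cross-attention.
import torch
import torch.nn as nn

class DynamicStaticCrossAttention(nn.Module):
    """Object queries (dynamic stream) attend to BEV features (static stream)."""
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, obj_queries: torch.Tensor, bev_feats: torch.Tensor) -> torch.Tensor:
        # obj_queries: (B, N_obj, C) per-agent queries from the dynamic stream
        # bev_feats:   (B, H, W, C) static scene features on a BEV grid
        B, H, W, C = bev_feats.shape
        kv = bev_feats.view(B, H * W, C)          # flatten the grid to a token sequence
        out, _ = self.attn(obj_queries, kv, kv)   # queries gather static scene context
        return self.norm(obj_queries + out)       # residual connection + layer norm

# Example: 32 object queries reading a 50x50 BEV map (all sizes hypothetical)
layer = DynamicStaticCrossAttention(dim=256)
q = torch.randn(2, 32, 256)
bev = torch.randn(2, 50, 50, 256)
print(layer(q, bev).shape)  # torch.Size([2, 32, 256])
```

In such a design, each dynamic agent query can pull in local static structure (lane geometry, drivable area) in addition to attending to camera features, which matches the abstract's claim that dynamic objects "directly benefit from the inferred static scene structure".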