Divide-and-Conquer for Lane-Aware Diverse Trajectory Prediction
Abstract: Trajectory prediction is a safety-critical tool for autonomous vehicles to plan and execute actions. Our work addresses two key challenges in trajectory prediction: learning multimodal outputs, and making better predictions by imposing constraints using driving knowledge. Recent methods have achieved strong performance using Multi-Choice Learning objectives such as winner-takes-all (WTA) or best-of-many. However, the impact of these methods on learning diverse hypotheses is under-studied, since such objectives depend heavily on their initialization for diversity. As our first contribution, we propose a novel Divide-And-Conquer (DAC) approach that acts as a better initialization technique for the WTA objective, resulting in diverse outputs without any spurious modes. Our second contribution is a novel trajectory prediction framework called ALAN that uses existing lane centerlines as anchors to provide trajectories constrained to the input lanes. Our framework produces multi-agent trajectory outputs in a single forward pass by capturing interactions through hypercolumn descriptors and incorporating scene information in the form of rasterized images and per-agent lane anchors. Experiments on synthetic and real data show that the proposed DAC captures the data distribution better than other objectives in the WTA family. Further, we show that our ALAN approach performs on par with or better than state-of-the-art methods evaluated on the nuScenes urban driving benchmark.
DOI: 10.48550/arxiv.2104.08277
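
To make the winner-takes-all (WTA) objective mentioned in the abstract concrete, here is a minimal sketch in Python/PyTorch: only the hypothesis closest to the ground truth receives gradient, which is what makes the objective sensitive to initialization. The function name and tensor shapes are illustrative assumptions, not taken from the paper.

```python
import torch

def winner_takes_all_loss(hypotheses, target):
    """Minimal winner-takes-all (WTA) loss sketch (hypothetical shapes).

    hypotheses: (B, K, T, 2) -- K predicted trajectories per sample,
                each with T timesteps of (x, y) coordinates.
    target:     (B, T, 2)    -- ground-truth trajectory.
    """
    # Per-hypothesis average displacement error against the ground truth: (B, K)
    errors = (hypotheses - target.unsqueeze(1)).norm(dim=-1).mean(dim=-1)
    # Keep only the best ("winning") hypothesis per sample; the other
    # K-1 hypotheses get no gradient, so poorly initialized hypotheses
    # may never be selected -- the diversity issue the paper's DAC
    # initialization is designed to address.
    best, _ = errors.min(dim=1)
    return best.mean()
```

This is a sketch under stated assumptions rather than the paper's implementation; variants such as best-of-many relax the hard minimum by averaging over the top-scoring hypotheses.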