SceneMotion: From Agent-Centric Embeddings to Scene-Wide Forecasts
Format: Article
Language: English
Abstract: Self-driving vehicles rely on multimodal motion forecasts to effectively interact with their environment and plan safe maneuvers. We introduce SceneMotion, an attention-based model for forecasting scene-wide motion modes of multiple traffic agents. Our model transforms local agent-centric embeddings into scene-wide forecasts using a novel latent context module. This module learns a scene-wide latent space from multiple agent-centric embeddings, enabling joint forecasting and interaction modeling. The competitive performance in the Waymo Open Interaction Prediction Challenge demonstrates the effectiveness of our approach. Moreover, we cluster future waypoints in time and space to quantify the interaction between agents. We merge all modes and analyze each mode independently to determine which clusters are resolved through interaction or result in conflict. Our implementation is available at: https://github.com/kit-mrt/future-motion
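The abstract's interaction metric, clustering predicted future waypoints of multiple agents in time and space, can be illustrated with a minimal sketch. The paper's exact clustering criterion and thresholds are not given here, so the following single-linkage grouping per timestep (waypoints within `eps` meters of each other at the same timestep form one cluster) is an assumption for illustration only; clusters containing more than one agent flag a potential interaction.

```python
import numpy as np

def cluster_waypoints(waypoints, eps=2.0):
    """Group agents whose predicted waypoints coincide in time and space.

    waypoints: array of shape (num_agents, num_timesteps, 2).
    At each timestep, agents are merged (single-linkage via union-find)
    whenever their waypoints lie within `eps` meters of each other.
    Returns a list (one entry per timestep) of clusters, each a set
    of agent indices; multi-agent clusters indicate interaction.
    """
    num_agents, num_steps, _ = waypoints.shape
    clusters_per_step = []
    for t in range(num_steps):
        pts = waypoints[:, t, :]
        parent = list(range(num_agents))

        def find(i):
            # path-halving union-find lookup
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i in range(num_agents):
            for j in range(i + 1, num_agents):
                if np.linalg.norm(pts[i] - pts[j]) <= eps:
                    parent[find(i)] = find(j)

        groups = {}
        for i in range(num_agents):
            groups.setdefault(find(i), set()).add(i)
        clusters_per_step.append(list(groups.values()))
    return clusters_per_step

# Toy example: agents 0 and 1 converge toward the same point,
# agent 2 stays far away throughout.
wp = np.array([
    [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
    [[4.0, 0.0], [3.0, 0.0], [2.5, 0.0]],
    [[50.0, 50.0], [50.0, 51.0], [50.0, 52.0]],
])
clusters = cluster_waypoints(wp, eps=1.0)
interacting = [t for t, cs in enumerate(clusters) if any(len(c) > 1 for c in cs)]
```

Running the same procedure on the merged set of all modes versus each mode independently, as the abstract describes, would then distinguish clusters that every mode resolves from those where modes disagree and conflict.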
DOI: 10.48550/arxiv.2408.01537