RedMotion: Motion Prediction via Redundancy Reduction
Format: Article
Language: English
Abstract: We introduce RedMotion, a transformer model for motion prediction in
self-driving vehicles that learns environment representations via redundancy
reduction. Our first type of redundancy reduction is induced by an internal
transformer decoder and reduces a variable-sized set of local road environment
tokens, representing road graphs and agent data, to a fixed-sized global
embedding. The second type of redundancy reduction is obtained by
self-supervised learning and applies the redundancy reduction principle to
embeddings generated from augmented views of road environments. Our experiments
reveal that our representation learning approach outperforms PreTraM, Traj-MAE,
and GraphDINO in a semi-supervised setting. Moreover, RedMotion achieves
competitive results compared to HPTR or MTR++ in the Waymo Motion Prediction
Challenge. Our open-source implementation is available at:
https://github.com/kit-mrt/future-motion
DOI: 10.48550/arxiv.2306.10840
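The two types of redundancy reduction described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the official RedMotion implementation (which is linked above): the class name, hyperparameters, and the use of a single cross-attention layer and a Barlow-Twins-style loss are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class TokenReducer(nn.Module):
    """Sketch of the first redundancy reduction: learned query tokens
    cross-attend to a variable-sized set of road environment tokens
    (road graph and agent data) and emit a fixed-sized global embedding.
    Structure and hyperparameters are illustrative assumptions."""

    def __init__(self, dim: int = 64, num_queries: int = 16, num_heads: int = 4):
        super().__init__()
        # Learned queries define the fixed output size.
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n_tokens, dim); n_tokens may vary between scenes
        batch = tokens.shape[0]
        queries = self.queries.unsqueeze(0).expand(batch, -1, -1)
        reduced, _ = self.cross_attn(queries, tokens, tokens)
        # Output shape is fixed regardless of the input token count.
        return reduced


def redundancy_reduction_loss(z1: torch.Tensor, z2: torch.Tensor,
                              lam: float = 5e-3) -> torch.Tensor:
    """Sketch of the second redundancy reduction: a Barlow-Twins-style
    self-supervised loss pushing the cross-correlation matrix of
    embeddings from two augmented views toward the identity matrix."""
    # Normalize each embedding dimension over the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / z1.shape[0]  # (dim, dim) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1.0).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag


# Usage: two scenes with different token counts reduce to the same shape.
reducer = TokenReducer()
emb_small = reducer(torch.randn(2, 10, 64))   # 10 environment tokens
emb_large = reducer(torch.randn(2, 50, 64))   # 50 environment tokens
```

The fixed-size output is what makes the self-supervised objective straightforward to apply: embeddings of differently sized road environments become directly comparable across augmented views.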