VisionTrap: Vision-Augmented Trajectory Prediction Guided by Textual Descriptions
Format: Article
Language: English
Abstract: Predicting future trajectories for other road agents is an essential task for
autonomous vehicles. Established trajectory prediction methods primarily use
agent tracks generated by a detection and tracking system and HD map as inputs.
In this work, we propose a novel method that also incorporates visual input
from surround-view cameras, allowing the model to utilize visual cues such as
human gazes and gestures, road conditions, vehicle turn signals, etc., which are
typically hidden from the model in prior methods. Furthermore, we use textual
descriptions generated by a Vision-Language Model (VLM) and refined by a Large
Language Model (LLM) as supervision during training to guide the model on what
to learn from the input data. Despite using these extra inputs, our method
achieves a latency of 53 ms, making it feasible for real-time processing and
significantly faster than previous single-agent prediction methods with
similar performance. Our experiments show that both the visual inputs and
the textual descriptions contribute to improvements in trajectory prediction
performance, and our qualitative analysis highlights how the model is able to
exploit these additional inputs. Lastly, in this work we create and release the
nuScenes-Text dataset, which augments the established nuScenes dataset with
rich textual annotations for every scene, demonstrating the positive impact of
utilizing VLMs on trajectory prediction. Our project page is at
https://moonseokha.github.io/VisionTrap/
DOI: 10.48550/arxiv.2407.12345