Semantic scene upgrades for trajectory prediction
Published in: | Machine vision and applications 2023-03, Vol. 34 (2), p. 23-23, Article 23 |
---|---|
Authors: | , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Understanding pedestrian motion is critical for many real-world applications, e.g., autonomous driving and social robot navigation. It is a challenging problem, since autonomous agents require a complete understanding of their surroundings, including complex spatial, social, and scene dependencies. In trajectory prediction research, spatial and social interactions are widely studied, while scene understanding has received less attention. In this paper, we study the effectiveness of different encoding mechanisms for understanding the influence of the scene on pedestrian trajectories. We leverage a recurrent Variational Autoencoder to encode a pedestrian's motion history, its social interactions with other pedestrians, and semantic scene information. We then evaluate performance on public datasets such as ETH–UCY, Stanford Drone, and Grand Central Station. Experimental results show that utilizing a fully segmented map, for explicit scene semantics, outperforms other variants of scene representation (semantic and CNN embeddings) for trajectory prediction tasks. |
ISSN: | 0932-8092, 1432-1769 |
DOI: | 10.1007/s00138-022-01357-z |
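
The abstract describes a recurrent Variational Autoencoder that fuses a pedestrian's motion history with semantic scene information from a segmented map. The sketch below is purely illustrative and is not the authors' implementation: a simple tanh recurrence stands in for the recurrent encoder, layer sizes and parameter names are arbitrary assumptions, and only the encoder plus the standard VAE reparameterization step are shown.

```python
# Illustrative sketch (NOT the paper's implementation) of encoding a
# trajectory plus a one-hot semantic map patch into a Gaussian latent.
# All sizes, names, and the tanh-RNN cell are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(h, x, W, U, b):
    # Simplified recurrent cell; a real system would likely use a GRU/LSTM.
    return np.tanh(W @ x + U @ h + b)

def encode(traj, sem_patch, params):
    """Encode observed (x, y) steps and a local semantic-map patch
    into the mean and log-variance of a Gaussian latent."""
    W, U, b, Wz = params
    h = np.zeros(U.shape[0])
    for xy in traj:                              # motion history, step by step
        h = rnn_step(h, xy, W, U, b)
    feat = np.concatenate([h, sem_patch.ravel()])  # fuse motion + scene
    stats = Wz @ feat
    mu, logvar = np.split(stats, 2)
    return mu, logvar

def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Toy inputs: 8 observed (x, y) positions and a 4x4 patch of a fully
# segmented map with 3 semantic classes, one-hot encoded per cell.
hidden, latent = 16, 8
traj = rng.standard_normal((8, 2))
sem_patch = np.eye(3)[rng.integers(0, 3, size=(4, 4))]

params = (
    rng.standard_normal((hidden, 2)) * 0.1,                       # W: input -> hidden
    rng.standard_normal((hidden, hidden)) * 0.1,                  # U: hidden -> hidden
    np.zeros(hidden),                                             # b: bias
    rng.standard_normal((2 * latent, hidden + 4 * 4 * 3)) * 0.1,  # Wz: feature -> (mu, logvar)
)

mu, logvar = encode(traj, sem_patch, params)
z = reparameterize(mu, logvar)
print(z.shape)  # (8,)
```

A decoder (not shown) would then map `z`, conditioned on the same context, to future positions; the abstract's comparison concerns which scene representation (fully segmented map vs. semantic or CNN embedding) is concatenated into the context.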