Uncertainty-Aware Pedestrian Trajectory Prediction via Distributional Diffusion
Saved in:
Main author: , , , , ,
Format: Article
Language: English
Subject headings:
Online access: Order full text
Summary: Tremendous efforts have been put forth to predict pedestrian
trajectories with generative models that accommodate the uncertainty and
multi-modality of human behavior. An individual's inherent uncertainty,
e.g., a change of destination, can be masked by the complex patterns that
result from the movements of interacting pedestrians. Latent variable-based
generative models, however, often entangle such uncertainty with this
complexity, limiting either latent expressivity or predictive diversity. In
this work, we propose to model these two factors separately by implicitly
deriving a flexible latent representation to capture intricate pedestrian
movements, while integrating the predictive uncertainty of individuals with
explicit bivariate Gaussian mixture densities over their future locations.
More specifically, we present a model-agnostic uncertainty-aware pedestrian
trajectory prediction framework that parameterizes the sufficient statistics
of the Gaussian mixture that jointly comprises the multi-modal trajectories.
We further estimate these parameters of interest by approximating a denoising
process that progressively recovers pedestrian movements from noise. Unlike
previous studies, we translate predictive stochasticity into explicit
distributions, allowing the model to readily generate plausible future
trajectories that indicate individuals' self-uncertainty. Moreover, our
framework is compatible with different neural network architectures. We
empirically show performance gains over the state of the art, even with
lighter backbones, across most scenes on two public benchmarks.
DOI: 10.48550/arxiv.2303.08367
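The abstract's core idea of representing each pedestrian's predictive uncertainty as an explicit bivariate Gaussian mixture over future locations can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the mixture weights, means, and covariances are hypothetical placeholder values standing in for the sufficient statistics that the paper's denoising network would output, and the function name `sample_future_locations` is my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of a K-component bivariate Gaussian mixture over a
# pedestrian's next (x, y) location. In the paper these sufficient statistics
# would be predicted by the denoising model; here they are illustrative.
K = 3
weights = np.array([0.5, 0.3, 0.2])           # mixing coefficients, sum to 1
means = np.array([[1.0, 0.0],                 # candidate future (x, y)
                  [0.0, 1.0],                 # offsets, one mode per
                  [-1.0, -0.5]])              # plausible behavior
covs = np.stack([0.1 * np.eye(2) for _ in range(K)])  # 2x2 covariances

def sample_future_locations(n_samples):
    """Draw n_samples plausible future (x, y) locations from the mixture."""
    # First pick a mode per sample, then draw from that mode's Gaussian.
    comps = rng.choice(K, size=n_samples, p=weights)
    return np.array([rng.multivariate_normal(means[k], covs[k])
                     for k in comps])

samples = sample_future_locations(20)
print(samples.shape)  # (20, 2): twenty sampled 2-D future positions
```

Sampling many such locations yields a diverse, multi-modal set of plausible futures, which is the sense in which an explicit mixture density exposes an individual's self-uncertainty rather than hiding it in a latent variable.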