Multimodal Future Localization and Emergence Prediction for Objects in Egocentric View with a Reachability Prior
Saved in:
Format: Article
Language: English
Online access: Order full text
Abstract: In this paper, we investigate the problem of anticipating future dynamics, particularly the future location of other vehicles and pedestrians, in the view of a moving vehicle. We approach two fundamental challenges: (1) the partial visibility due to the egocentric view with a single RGB camera and the considerable field-of-view change due to the egomotion of the vehicle; (2) the multimodality of the distribution of future states. In contrast to many previous works, we do not assume structural knowledge from maps. Instead, we estimate a reachability prior for certain classes of objects from the semantic map of the present image and propagate it into the future using the planned egomotion. Experiments show that the reachability prior combined with multi-hypotheses learning improves multimodal prediction of the future location of tracked objects and, for the first time, the emergence of new objects. We also demonstrate promising zero-shot transfer to unseen datasets. Source code is available at https://github.com/lmb-freiburg/FLN-EPN-RPN.
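The multi-hypotheses learning mentioned in the abstract is commonly trained with a winner-takes-all objective, in which only the hypothesis closest to the ground truth receives a gradient, so the hypotheses spread out to cover a multimodal distribution. A minimal sketch of such a loss (shapes and function names are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def wta_loss(hypotheses: np.ndarray, target: np.ndarray) -> float:
    """Winner-takes-all loss for multi-hypothesis prediction.

    hypotheses: (K, 2) array of K predicted future 2D locations.
    target:     (2,)   ground-truth future location.

    Only the hypothesis nearest to the target contributes to the
    loss, so different hypotheses can specialize to different modes.
    """
    # Euclidean distance of each hypothesis to the ground truth.
    dists = np.linalg.norm(hypotheses - target, axis=1)
    # The "winner" is the closest hypothesis; its error is the loss.
    return float(dists.min())

# Two hypotheses covering two modes; the closer one defines the loss.
hyps = np.array([[0.0, 0.0], [3.0, 4.0]])
print(wta_loss(hyps, np.array([3.0, 4.0])))  # → 0.0
```

In practice, the loss would be applied per time step and combined with the reachability prior as an additional input or weighting; the sketch above only illustrates the winner-takes-all principle.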
DOI: 10.48550/arxiv.2006.04700