Accurate Visual Localization for Automotive Applications
Format: Article
Language: English
Online access: Order full text
Abstract: Accurate vehicle localization is a crucial step towards building effective Vehicle-to-Vehicle networks and automotive applications. Yet standard-grade GPS data, such as that provided by mobile phones, is often noisy and exhibits significant localization errors in many urban areas. Approaches for accurate localization from imagery often rely on structure-based techniques, which limits their scale and makes them expensive to compute. In this paper, we present a scalable visual localization approach geared for real-time performance. We propose a hybrid coarse-to-fine approach that leverages visual and GPS location cues. Our solution uses a self-supervised approach to learn a compact road image representation. This representation enables efficient visual retrieval and provides coarse localization cues, which are fused with vehicle ego-motion to obtain high-accuracy location estimates. As a benchmark to evaluate the performance of our visual localization approach, we introduce a new large-scale driving dataset based on video and GPS data obtained from a large-scale network of connected dash-cams. Our experiments confirm that our approach is highly effective in challenging urban environments, reducing localization error by an order of magnitude.
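
To make the coarse-to-fine idea concrete, below is a minimal Python sketch of one possible reading of the pipeline: a query frame embedding is matched against a geotagged database of road-image embeddings to obtain a coarse position cue, which is then fused with vehicle ego-motion. The embedding dimensionality, the cosine-similarity retrieval, the constant-velocity Kalman-style fusion, and all noise values are illustrative assumptions and not details taken from the paper.

```python
# Illustrative coarse-to-fine localization sketch (assumptions, not the paper's
# actual method): cosine-similarity retrieval for the coarse fix, and a toy 2-D
# Kalman-style filter for fusing ego-motion with retrieval cues.
import numpy as np

def coarse_location(query_embedding, db_embeddings, db_positions):
    """Return the position of the geotagged database image whose embedding
    is most similar (cosine) to the query frame embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    db = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    best = int(np.argmax(db @ q))
    return db_positions[best]

class EgoMotionFuser:
    """Toy constant-velocity Kalman-style filter: predict with the ego-motion
    delta, correct with the coarse retrieval fix. Noise levels are placeholders."""
    def __init__(self, init_pos, pos_var=25.0, motion_var=0.5, meas_var=16.0):
        self.x = np.asarray(init_pos, dtype=float)  # position estimate (local metric frame)
        self.P = np.eye(2) * pos_var                # estimate covariance
        self.Q = np.eye(2) * motion_var             # ego-motion (process) noise
        self.R = np.eye(2) * meas_var               # retrieval (measurement) noise

    def predict(self, ego_delta):
        self.x = self.x + np.asarray(ego_delta, dtype=float)
        self.P = self.P + self.Q

    def update(self, coarse_fix):
        K = self.P @ np.linalg.inv(self.P + self.R)  # Kalman gain
        self.x = self.x + K @ (np.asarray(coarse_fix, dtype=float) - self.x)
        self.P = (np.eye(2) - K) @ self.P

# Usage with stand-in data: 1000 database images with 128-D embeddings.
rng = np.random.default_rng(0)
db_emb = rng.normal(size=(1000, 128))
db_pos = rng.uniform(0, 500, size=(1000, 2))

fuser = EgoMotionFuser(init_pos=[0.0, 0.0])
for _ in range(5):
    fuser.predict(ego_delta=[1.2, 0.3])             # odometry between frames
    query_emb = rng.normal(size=128)                # stand-in frame embedding
    fuser.update(coarse_location(query_emb, db_emb, db_pos))
print("fused position estimate:", fuser.x)
```

In this sketch the retrieval step only supplies a coarse cue, so its measurement noise is set much larger than the ego-motion noise; the paper does not specify the fusion mechanism or its parameters, so this balance is purely for illustration.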
DOI: 10.48550/arxiv.1905.03706