Steering Prediction via a Multi-Sensor System for Autonomous Racing
Format: Article
Language: English
Abstract: Autonomous racing has rapidly gained research attention. Traditionally, racing cars rely on 2D LiDAR as their primary visual system. In this work, we explore integrating an event camera with the existing system to provide enhanced temporal information. Our goal is to fuse the 2D LiDAR data with event data in an end-to-end learning framework for steering prediction, which is crucial for autonomous racing. To the best of our knowledge, this is the first study to address this challenging research topic. We begin by creating a multi-sensor dataset specifically for steering prediction. Using this dataset, we establish a benchmark by evaluating various state-of-the-art (SOTA) fusion methods. Our observations reveal that existing methods often incur substantial computational costs. To address this, we apply low-rank techniques to propose a novel, efficient, and effective fusion design. We also introduce a new fusion learning policy to guide the fusion process, enhancing robustness against misalignment. Our fusion architecture provides better steering prediction than LiDAR alone, significantly reducing the RMSE from 7.72 to 1.28. Compared to the second-best fusion method, our design uses only 11% of the learnable parameters while achieving better accuracy. The source code, dataset, and benchmark will be released to promote future research.
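
The abstract names a low-rank fusion design but gives no architectural details. As a rough illustration only, the sketch below shows one common way low-rank techniques cut fusion cost: a rank-r bilinear interaction between a LiDAR embedding and an event embedding, feeding a steering regression head. Every layer size, encoder choice, and name (`LowRankFusion`, `SteeringNet`) is a hypothetical assumption, not the paper's actual method.

```python
# Hypothetical sketch: low-rank bilinear fusion of LiDAR and event features
# for steering-angle regression. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Rank-r bilinear fusion: replaces a full (d1 x d2 x d_out) bilinear
    tensor with factors costing roughly r*(d1 + d2 + d_out) parameters."""
    def __init__(self, d_lidar, d_event, d_out, rank=8):
        super().__init__()
        self.U = nn.Linear(d_lidar, rank, bias=False)  # LiDAR factor
        self.V = nn.Linear(d_event, rank, bias=False)  # event factor
        self.W = nn.Linear(rank, d_out)                # mixes rank components

    def forward(self, z_lidar, z_event):
        # Elementwise product of the rank-r projections realizes the
        # low-rank bilinear form sum_k W[:, k] * (u_k . z_l)(v_k . z_e).
        return self.W(self.U(z_lidar) * self.V(z_event))

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Toy encoders: a 1D conv over the LiDAR range scan and an MLP
        # over a flattened event representation (e.g., event counts).
        self.lidar_enc = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())  # -> 128-d
        self.event_enc = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU())         # -> 128-d
        self.fuse = LowRankFusion(128, 128, 64, rank=8)
        self.head = nn.Linear(64, 1)                # steering angle

    def forward(self, scan, events):
        fused = torch.relu(self.fuse(self.lidar_enc(scan),
                                     self.event_enc(events)))
        return self.head(fused)

model = SteeringNet()
scan = torch.randn(4, 1, 1081)    # batch of 2D LiDAR range scans
events = torch.randn(4, 256)      # batch of flattened event features
print(model(scan, events).shape)  # torch.Size([4, 1])
```

The point of the factorization is the parameter count: a full bilinear interaction here would need 128 * 128 * 64 weights, while the rank-8 version needs about 8 * (128 + 128 + 64), which is consistent in spirit (though not necessarily in mechanism) with the abstract's claim of matching accuracy at 11% of the parameters.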
DOI: 10.48550/arxiv.2409.19356