Addressing Data Misalignment in Image-LiDAR Fusion on Point Cloud Segmentation
Format: Article
Language: English
Abstract: With the advent of advanced multi-sensor fusion models, perception
performance in autonomous driving has improved notably. Despite these
advances, challenges persist, particularly in fusing data from cameras and
LiDAR sensors. A critical concern is the accurate alignment of data from these
disparate sensors. Our observations indicate that the projected positions of
LiDAR points often misalign with the corresponding image. Furthermore, fusion
models appear to struggle to accurately segment these misaligned points. In
this paper, we address this problem carefully, with a specific focus on the
nuScenes dataset and the state-of-the-art fusion model 2DPASS, and provide
possible solutions and potential improvements.
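The misalignment described in the abstract arises in the standard LiDAR-to-camera projection step, where each 3D point is transformed into the camera frame and projected through the intrinsics; calibration error in either matrix shifts the projected pixel. A minimal sketch of that step, assuming a pinhole camera model (the names `T_cam_from_lidar` and `K` are illustrative, not taken from the paper):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project LiDAR points (N, 3) into pixel coordinates (N, 2).

    T_cam_from_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K: 3x3 camera intrinsic matrix.
    Returns pixel coordinates and a mask of points in front of the camera.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0  # keep only points with positive depth
    # Pinhole projection: apply intrinsics, then divide by depth.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, in_front
```

Any inaccuracy in `T_cam_from_lidar` (e.g. from sensor motion between the LiDAR sweep and camera shutter) displaces the returned pixel coordinates, which is the kind of misalignment the paper studies.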
DOI: 10.48550/arxiv.2309.14932