RGB-LiDAR fusion for accurate 2D and 3D object detection
Published in: Machine Vision and Applications, 2023-09, Vol. 34 (5), p. 86, Article 86
Authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Effective detection of road objects in diverse environmental conditions is a critical requirement for autonomous driving systems. Multi-modal sensor fusion is a promising approach for improving perception, as it enables the combination of information from multiple sensor streams in order to optimize the integration of their respective data. Fusion operators are employed within fully convolutional architectures to combine features derived from different modalities. In this research, we present a framework that utilizes early fusion mechanisms to train and evaluate 2D object detection algorithms. Our evaluation shows that sensor fusion outperforms RGB-only detection methods, yielding a boost of +15.07% for car detection, +10.81% for pedestrian detection, and +19.86% for cyclist detection. In our comparative study, we evaluated three arithmetic-based fusion operators and two learnable fusion operators. Furthermore, we conducted a performance comparison between early- and mid-level fusion techniques and investigated the effects of early fusion on state-of-the-art 3D object detectors. Lastly, we provide a comprehensive analysis of the computational complexity of our proposed framework, along with an ablation study.
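The abstract does not name the three arithmetic-based fusion operators it evaluates. In the early-fusion literature, common choices are element-wise addition, multiplication, and maximum over same-shape feature maps from each modality. A minimal NumPy sketch under that assumption (the function name, shapes, and operator set are illustrative, not the paper's actual implementation):

```python
import numpy as np

def fuse(rgb_feat, lidar_feat, op="add"):
    """Element-wise arithmetic fusion of two same-shape feature maps.

    rgb_feat, lidar_feat: arrays of shape (C, H, W), e.g. convolutional
    features from the RGB image and from a projected LiDAR depth map.
    Hypothetical example; the paper's operators are not specified here.
    """
    if rgb_feat.shape != lidar_feat.shape:
        raise ValueError("feature maps must share a shape for element-wise fusion")
    if op == "add":
        return rgb_feat + lidar_feat
    if op == "mul":
        return rgb_feat * lidar_feat
    if op == "max":
        return np.maximum(rgb_feat, lidar_feat)
    raise ValueError(f"unknown operator: {op}")

# Example: fuse two 64-channel feature maps (shapes chosen for illustration)
rgb = np.random.rand(64, 32, 32)
lidar = np.random.rand(64, 32, 32)
fused = fuse(rgb, lidar, op="max")
```

Learnable operators, by contrast, would typically replace the fixed arithmetic with trained parameters (e.g. a 1x1 convolution over the concatenated modalities), which is what allows the network to weight each sensor's contribution per feature channel.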
ISSN: 0932-8092, 1432-1769
DOI: 10.1007/s00138-023-01435-w