GOOD: General Optimization-based Fusion for 3D Object Detection via LiDAR-Camera Object Candidates
Format: Article
Language: English
Abstract: 3D object detection serves as the core basis of the perception tasks in autonomous driving. Recent years have seen rapid progress in multi-modal fusion strategies for more robust and accurate 3D object detection. However, current research on robust fusion relies entirely on learning-based frameworks, which demand large amounts of training data and are inconvenient to deploy in new scenes. In this paper, we propose GOOD, a general optimization-based fusion framework that achieves satisfactory detection without training additional models and works with any combination of 2D and 3D detectors to improve the accuracy and robustness of 3D detection. First, we apply a mutual-sided nearest-neighbor probability model to achieve 3D-2D data association. Then, we design an optimization pipeline that handles different kinds of instances separately based on the matching result. In addition, a 3D multi-object tracking (MOT) method is introduced to further enhance performance with the aid of previous frames. To the best of our knowledge, this is the first optimization-based late fusion framework for multi-modal 3D object detection, which can serve as a baseline for subsequent research. Experiments on both the nuScenes and KITTI datasets show that GOOD outperforms PointPillars by 9.1% in mAP and achieves results competitive with the learning-based late fusion method CLOCs.
DOI: 10.48550/arxiv.2303.09800
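
The 3D-2D data association step described in the abstract pairs projected 3D object candidates with 2D detections only when each is the other's best match. The sketch below illustrates that mutual nearest-neighbor idea in Python using a plain IoU affinity; the function names, the IoU-based affinity, and the `min_iou` threshold are illustrative assumptions, not the paper's actual probability model or code.

```python
# Minimal sketch of mutual nearest-neighbor 3D-2D association.
# Generic stand-in for the paper's mutual-sided nearest-neighbor probability
# model; names, IoU affinity, and threshold are assumptions, not the authors' code.
import numpy as np

def iou_2d(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def mutual_nn_match(proj_3d_boxes, det_2d_boxes, min_iou=0.3):
    """Match projected 3D candidates to 2D detections.

    proj_3d_boxes: (N, 4) image-plane boxes obtained by projecting 3D boxes.
    det_2d_boxes:  (M, 4) boxes from the 2D detector.
    Returns a list of (i, j) pairs that are each other's nearest neighbor.
    """
    n, m = len(proj_3d_boxes), len(det_2d_boxes)
    if n == 0 or m == 0:
        return []

    # Pairwise affinity between every projected 3D box and every 2D box.
    affinity = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            affinity[i, j] = iou_2d(proj_3d_boxes[i], det_2d_boxes[j])

    matches = []
    for i in range(n):
        j = int(np.argmax(affinity[i]))          # best 2D box for 3D candidate i
        if affinity[i, j] < min_iou:
            continue
        if int(np.argmax(affinity[:, j])) == i:  # mutual: i is also best for j
            matches.append((i, j))
    return matches
```

The mutual ("two-sided") check keeps only unambiguous pairs, which matters in a late-fusion setting because unmatched 3D or 2D candidates can then be handled by separate optimization branches, as the abstract indicates.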