PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion
Format: Article
Language: English
Abstract: Fusing 3D LiDAR features with 2D camera features is a promising technique for enhancing the accuracy of 3D detection, thanks to their complementary physical properties. While most existing methods focus on directly fusing camera features with raw LiDAR point clouds or shallow-level 3D features, we observe that directly combining 2D and 3D features in deeper layers actually decreases accuracy due to feature misalignment. The misalignment, which stems from the aggregation of features learned from large receptive fields, becomes increasingly severe in deeper layers. In this paper, we propose PathFusion to enable semantically coherent LiDAR-camera deep feature fusion. PathFusion introduces a path consistency loss at multiple stages within the network, encouraging the 2D backbone and its fusion path to transform 2D features in a way that aligns semantically with the transformation of the 3D backbone. This ensures semantic consistency between 2D and 3D features even in deeper layers and makes fuller use of the network's learning capacity. We apply PathFusion to improve a prior-art fusion baseline, Focals Conv, and observe an improvement of over 1.6% in mAP on the nuScenes test split, consistently with and without test-time data augmentation; moreover, PathFusion also improves KITTI $\text{AP}_{\text{3D}}$ (R11) by about 0.6% on the moderate level.
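The abstract describes the core mechanism only at a high level: a consistency loss applied at multiple backbone stages that keeps fused camera features semantically aligned with the 3D backbone features. The sketch below shows one plausible way such a multi-stage consistency term could look in PyTorch. The tensor shapes, the cosine-based distance, the per-stage weights, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of a multi-stage consistency loss between 3D backbone
# features and camera features projected into the same 3D layout.
# Assumption: both branches expose per-stage features over the same set of
# N voxel/point locations; the exact loss in PathFusion may differ.
import torch
import torch.nn.functional as F


def path_consistency_loss(feats_3d, feats_2d_projected, stage_weights=None):
    """Sum a per-stage consistency term over matched feature lists.

    feats_3d, feats_2d_projected: lists of tensors of shape (N, C_s),
        one entry per stage s.
    stage_weights: optional list of floats, one per stage.
    """
    if stage_weights is None:
        stage_weights = [1.0] * len(feats_3d)

    loss = feats_3d[0].new_zeros(())
    for w, f3d, f2d in zip(stage_weights, feats_3d, feats_2d_projected):
        # Compare normalized features so the term measures semantic
        # (directional) agreement rather than raw magnitude.
        f3d = F.normalize(f3d, dim=-1)
        f2d = F.normalize(f2d, dim=-1)
        loss = loss + w * (1.0 - (f3d * f2d).sum(dim=-1)).mean()
    return loss


if __name__ == "__main__":
    # Toy usage: three stages with different channel widths.
    feats_3d = [torch.randn(128, c) for c in (64, 128, 256)]
    feats_2d = [torch.randn(128, c) for c in (64, 128, 256)]
    print(path_consistency_loss(feats_3d, feats_2d).item())
```

In practice this term would be added to the detector's training objective alongside the usual detection losses, so gradients flow back through both the 2D backbone and its fusion path.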
DOI: 10.48550/arxiv.2212.06244