Boosting Monocular 3D Object Detection with Object-Centric Auxiliary Depth Supervision
Format: Article
Language: English
Abstract: Recent advances in monocular 3D detection leverage a depth estimation network explicitly as an intermediate stage of the 3D detection network. Depth map approaches yield more accurate depth to objects than other methods, thanks to a depth estimation network trained on a large-scale dataset. However, depth map approaches can be limited by the accuracy of the depth map, and sequentially running two separate networks for depth estimation and 3D detection significantly increases computation cost and inference time. In this work, we propose a method to boost an RGB image-based 3D detector by jointly training the detection network with a depth prediction loss analogous to the depth estimation task. In this way, our 3D detection network receives additional depth supervision from raw LiDAR points, which incurs no human annotation cost, and learns to estimate accurate depth without explicitly predicting a depth map. Our novel object-centric depth prediction loss focuses on depth around foreground objects, which is what matters for 3D object detection, thereby leveraging pixel-wise depth supervision in an object-centric manner. The depth regression model is further trained to predict the uncertainty of its depth estimates, which serves as the 3D confidence of detected objects. To effectively train the 3D detector with raw LiDAR points and to enable end-to-end training, we revisit the regression targets of 3D objects and design a suitable network architecture. Extensive experiments on the KITTI and nuScenes benchmarks show that our method significantly boosts the monocular image-based 3D detector to outperform depth map approaches while maintaining real-time inference speed.
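The abstract names two technical ingredients without giving their exact form: depth supervision built from raw LiDAR points, and an object-centric depth loss whose predicted uncertainty doubles as 3D confidence. The two sketches below illustrate one plausible reading of each; all function names, tensor layouts, and the Laplacian form of the uncertainty term are assumptions made for illustration, not the paper's confirmed formulation.

First, the supervision source: projecting raw LiDAR points through the camera intrinsics yields a sparse ground-truth depth map with no human annotation cost, a standard construction on KITTI-style data.

```python
import numpy as np

def lidar_to_sparse_depth(points_cam, K, height, width):
    """Illustrative helper (names and conventions assumed, not from the paper):
    project LiDAR points, already transformed into the camera frame, onto the
    image plane to obtain a sparse ground-truth depth map.

    points_cam: (N, 3) array in the camera frame (x right, y down, z forward)
    K:          (3, 3) camera intrinsic matrix
    """
    depth = np.zeros((height, width), dtype=np.float32)
    pts = points_cam[points_cam[:, 2] > 0]  # keep points in front of the camera
    proj = (K @ pts.T).T                    # pinhole projection
    u = np.round(proj[:, 0] / proj[:, 2]).astype(np.int64)
    v = np.round(proj[:, 1] / proj[:, 2]).astype(np.int64)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], pts[inside, 2]
    order = np.argsort(-z)                  # write far points first...
    depth[v[order], u[order]] = z[order]    # ...so near points win collisions
    return depth
```

Second, the loss: a common way to realize "pixel-wise depth supervision focused on foreground objects" is to restrict an uncertainty-aware regression loss (here a Laplacian negative log-likelihood, an assumption) to pixels that lie near 2D object boxes, so the predicted scale acts as a per-pixel depth confidence.

```python
import torch

def object_centric_depth_loss(pred_depth, pred_log_sigma, lidar_depth, fg_mask):
    """Hypothetical sketch of an object-centric, uncertainty-aware depth loss.

    pred_depth:     (B, H, W) depth predicted by the detector's depth head
    pred_log_sigma: (B, H, W) predicted log-scale of a Laplacian over depth
    lidar_depth:    (B, H, W) sparse depth from projected raw LiDAR points
                    (0 where no LiDAR return exists)
    fg_mask:        (B, H, W) bool, True for pixels on or near object boxes
    """
    # Supervise only pixels that both have a LiDAR return and belong to a
    # foreground object -- the "object-centric" restriction.
    valid = (lidar_depth > 0) & fg_mask
    if valid.sum() == 0:
        return pred_depth.new_zeros(())

    err = torch.abs(pred_depth[valid] - lidar_depth[valid])
    # Laplacian negative log-likelihood: a large predicted sigma down-weights
    # the error but is penalized by the log term, so sigma can serve as a
    # per-pixel depth (and hence 3D) confidence estimate.
    sigma = torch.exp(pred_log_sigma[valid])
    loss = err / sigma + pred_log_sigma[valid]
    return loss.mean()
```

Under this reading, the depth head and its uncertainty are training-time byproducts rather than an explicit intermediate depth map, which is consistent with the abstract's claim of real-time inference speed.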
DOI: 10.48550/arxiv.2210.16574