Scaling Multi-Camera 3D Object Detection through Weak-to-Strong Eliciting
Abstract: The emergence of Multi-Camera 3D Object Detection (MC3D-Det), facilitated by bird's-eye view (BEV) representation, signifies a notable progression in 3D object detection. Scaling MC3D-Det training effectively accommodates varied camera parameters and urban landscapes, paving the way for an MC3D-Det foundation model. However, the multi-view fusion stage of MC3D-Det methods relies on ill-posed monocular perception during training rather than on surround refinement, leading to what we term "surround refinement degradation". To this end, our study presents a weak-to-strong eliciting framework aimed at enhancing surround refinement while maintaining robust monocular perception. Specifically, our framework employs weakly tuned experts trained on distinct subsets, each inherently biased toward specific camera configurations and scenarios. These biased experts can capture monocular degeneration, which helps the multi-view fusion stage enhance its surround refinement ability. Moreover, a composite distillation strategy is proposed to integrate the universal knowledge of 2D foundation models with task-specific information. Finally, for MC3D-Det joint training, an elaborate dataset-merging strategy is designed to resolve inconsistent camera numbers and camera parameters across datasets. We establish a multi-dataset joint-training benchmark for MC3D-Det and thoroughly evaluate existing methods. Further, we demonstrate that the proposed framework delivers a consistent and significant improvement over multiple baselines. Our code is available at https://github.com/EnVision-Research/Scale-BEV.
DOI: 10.48550/arxiv.2404.06700