Scaling Multi-Camera 3D Object Detection through Weak-to-Strong Eliciting


Bibliographic Details
Main Authors: Lu, Hao; Tang, Jiaqi; Xu, Xinli; Cao, Xu; Zhang, Yunpeng; Wang, Guoqing; Du, Dalong; Chen, Hao; Chen, Yingcong
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Request full text
creator Lu, Hao; Tang, Jiaqi; Xu, Xinli; Cao, Xu; Zhang, Yunpeng; Wang, Guoqing; Du, Dalong; Chen, Hao; Chen, Yingcong
description The emergence of Multi-Camera 3D Object Detection (MC3D-Det), facilitated by bird's-eye-view (BEV) representation, marks a notable advance in 3D object detection. Scaling up MC3D-Det training to accommodate varied camera parameters and urban landscapes paves the way for an MC3D-Det foundation model. However, the multi-view fusion stage of MC3D-Det methods relies on ill-posed monocular perception during training rather than on surround-refinement ability, leading to what we term "surround refinement degradation". To address this, our study presents a weak-to-strong eliciting framework aimed at enhancing surround refinement while maintaining robust monocular perception. Specifically, the framework employs weakly tuned experts trained on distinct subsets, each inherently biased toward specific camera configurations and scenarios. These biased experts capture monocular degeneration, which helps the multi-view fusion stage strengthen its surround-refinement ability. Moreover, a composite distillation strategy is proposed to integrate the universal knowledge of 2D foundation models with task-specific information. Finally, for MC3D-Det joint training, a dataset merging strategy is designed to resolve inconsistent camera numbers and camera parameters across datasets. We establish a multi-dataset joint-training benchmark for MC3D-Det and thoroughly evaluate existing methods, and we demonstrate that the proposed framework brings a generalized and significant boost over multiple baselines. Our code is available at https://github.com/EnVision-Research/Scale-BEV.
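The dataset-merge problem described in the abstract (joint training over datasets with different camera counts) can be illustrated with a minimal sketch. This is an assumed padding-and-mask approach, not the paper's actual implementation; the function and field names below are hypothetical:

```python
def merge_samples(samples, max_cams=None):
    """Pad each sample's camera list to a common length for joint training.

    samples: list of dicts, each with key "cams" -> list of per-camera
    records (e.g. image tensors plus intrinsics/extrinsics).
    Returns a list of dicts with padded "cams" and a boolean "mask" that
    marks real vs. padded camera slots, so downstream fusion can ignore
    the padding.
    """
    if max_cams is None:
        # Use the largest camera count seen across all datasets.
        max_cams = max(len(s["cams"]) for s in samples)
    merged = []
    for s in samples:
        n = len(s["cams"])
        pad = max_cams - n
        merged.append({
            "cams": s["cams"] + [None] * pad,    # placeholder views
            "mask": [True] * n + [False] * pad,  # real vs. padded cameras
        })
    return merged
```

In practice the mask would be carried through the multi-view fusion stage so that padded views contribute nothing to the BEV features; camera-parameter differences would additionally be handled by normalizing or embedding each view's intrinsics.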
doi_str_mv 10.48550/arxiv.2404.06700
format Article
date 2024-04-09
rights http://creativecommons.org/licenses/by/4.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2404.06700
language eng
recordid cdi_arxiv_primary_2404_06700
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Scaling Multi-Camera 3D Object Detection through Weak-to-Strong Eliciting
url https://arxiv.org/abs/2404.06700