CooPre: Cooperative Pretraining for V2X Cooperative Perception
Existing Vehicle-to-Everything (V2X) cooperative perception methods rely on accurate multi-agent 3D annotations. Nevertheless, it is time-consuming and expensive to collect and annotate real-world data, especially for V2X systems. In this paper, we present a self-supervised learning method for V2X cooperative perception, which utilizes the vast amount of unlabeled 3D V2X data to enhance perception performance. Beyond simply extending previous pre-training methods for point-cloud representation learning, we introduce a novel self-supervised Cooperative Pretraining framework (termed CooPre) customized for collaborative scenarios. We point out that cooperative point-cloud sensing compensates for information loss among agents. This motivates us to design a novel proxy task for the 3D encoder to reconstruct LiDAR point clouds across different agents. In addition, we develop a V2X bird's-eye-view (BEV) guided masking strategy which effectively allows the model to attend to 3D features across heterogeneous V2X agents (i.e., vehicles and infrastructure) in the BEV space. Notably, this masking strategy effectively pretrains the 3D encoder and is compatible with mainstream cooperative perception backbones. Our approach, validated through extensive experiments on representative datasets (i.e., V2X-Real, V2V4Real, and OPV2V), leads to a performance boost across all V2X settings. Additionally, we demonstrate the framework's improvements in cross-domain transferability, data efficiency, and robustness under challenging scenarios. The code will be made publicly available.
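The abstract describes a BEV-guided masking strategy paired with a cross-agent point-cloud reconstruction proxy task. As a rough, hypothetical illustration of that idea (not the paper's implementation), the sketch below masks a fraction of occupied bird's-eye-view cells in a fused multi-agent point cloud; the grid extent, cell size, mask ratio, and the concatenation-based "fusion" are assumptions made only for this example.

```python
# Hypothetical sketch of a BEV-guided masking step for masked point-cloud
# pretraining (not the authors' code). Grid extent, cell size, mask ratio,
# and the naive concatenation "fusion" are illustrative assumptions.
import numpy as np

def bev_guided_mask(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0),
                    cell=0.4, mask_ratio=0.7, rng=None):
    """Split a fused point cloud into visible and masked points by BEV cell.

    points: (N, 3) array of x, y, z in a shared reference frame, e.g. the
    concatenation of LiDAR sweeps from all cooperating V2X agents.
    Returns (visible_points, masked_points); the masked points are what a
    3D encoder would be trained to reconstruct during pretraining.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Assign every point to a bird's-eye-view grid cell.
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(np.int64)
    iy = np.floor((points[:, 1] - y_range[0]) / cell).astype(np.int64)
    n_y = int(np.ceil((y_range[1] - y_range[0]) / cell))
    cell_id = ix * n_y + iy
    # Masking is guided by occupancy: only cells that contain points count.
    occupied = np.unique(cell_id)
    n_masked = int(mask_ratio * occupied.size)
    masked_cells = rng.choice(occupied, size=n_masked, replace=False)
    is_masked = np.isin(cell_id, masked_cells)
    return points[~is_masked], points[is_masked]

# Usage: fuse clouds from a vehicle and an infrastructure sensor, then mask.
vehicle = np.random.uniform(-50, 50, size=(4096, 3))
infra = np.random.uniform(-50, 50, size=(4096, 3))
visible, target = bev_guided_mask(np.concatenate([vehicle, infra], axis=0))
```

In an actual pretraining pipeline, the visible points would feed the 3D encoder and the masked points would serve as the reconstruction target.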
Main authors: Zhao, Seth Z; Xiang, Hao; Xu, Chenfeng; Xia, Xin; Zhou, Bolei; Ma, Jiaqi
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: Order full text
creator | Zhao, Seth Z; Xiang, Hao; Xu, Chenfeng; Xia, Xin; Zhou, Bolei; Ma, Jiaqi |
doi_str_mv | 10.48550/arxiv.2408.11241 |
format | Article |
creationdate | 2024-08-20 |
rights | http://creativecommons.org/licenses/by/4.0 (free to read) |
linktorsrc | https://arxiv.org/abs/2408.11241 |
backlink | https://doi.org/10.48550/arXiv.2408.11241 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2408.11241 |
language | eng |
recordid | cdi_arxiv_primary_2408_11241 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | CooPre: Cooperative Pretraining for V2X Cooperative Perception |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T12%3A38%3A36IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=CooPre:%20Cooperative%20Pretraining%20for%20V2X%20Cooperative%20Perception&rft.au=Zhao,%20Seth%20Z&rft.date=2024-08-20&rft_id=info:doi/10.48550/arxiv.2408.11241&rft_dat=%3Carxiv_GOX%3E2408_11241%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |