Slim-FCP: Lightweight-Feature-Based Cooperative Perception for Connected Automated Vehicles


Bibliographic Details
Published in: IEEE Internet of Things Journal, 2022-09, Vol. 9 (17), pp. 15630-15638
Authors: Guo, Jingda; Carrillo, Dominic; Chen, Qi; Yang, Qing; Fu, Song; Lu, Hongsheng; Guo, Rui
Format: Article
Language: English
Abstract: Cooperative perception provides a novel way to overcome the sensing limitations of a single automated vehicle and can potentially improve driving safety. To reduce the transmission data volume, existing solutions use the intermediate data generated by convolutional neural network (CNN) models, namely feature maps, to achieve cooperative perception. The feature maps, however, are too large to be transmitted over current V2X technology. We propose a novel approach, called Slim-FCP, to significantly reduce the transmission data size. It employs a channelwise feature encoder that removes irrelevant features for a better compression ratio. In addition, it adopts an intelligent channel selection strategy through which only representative channels of the feature maps are selected for transmission. To evaluate the effectiveness of Slim-FCP, we further define a recall-to-bandwidth (RB) ratio metric to quantitatively measure how the recall of object detection changes with respect to the available network bandwidth. Experimental results show that Slim-FCP reduces the transmission data size by 75% compared with the best state-of-the-art solution, with only a slight loss in object detection recall.
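The channel selection and RB-ratio ideas in the abstract can be sketched roughly as follows. This is a minimal illustration only: the importance score (mean absolute activation) and the `rb_ratio` definition are assumptions for exposition, not the paper's exact formulations.

```python
import numpy as np

def select_representative_channels(feature_map, k):
    """Rank channels of a (C, H, W) feature map by mean absolute
    activation and keep the k highest-scoring ones -- a stand-in for
    Slim-FCP's channel selection (the paper's criterion may differ)."""
    scores = np.abs(feature_map).mean(axis=(1, 2))  # one score per channel
    keep = np.sort(np.argsort(scores)[::-1][:k])    # top-k channel indices
    return keep, feature_map[keep]

def rb_ratio(recall, bandwidth_mbps):
    """Recall-to-bandwidth ratio: detection recall achieved per unit
    of consumed network bandwidth (illustrative definition)."""
    return recall / bandwidth_mbps

# Example: transmit 4 of 16 channels, a 75% reduction in channel count.
fmap = np.random.default_rng(0).normal(size=(16, 32, 32))
idx, slim = select_representative_channels(fmap, k=4)
print(slim.shape)           # (4, 32, 32)
print(rb_ratio(0.90, 2.0))  # 0.45
```

Under this sketch, a scheme that keeps recall nearly constant while shrinking the transmitted data raises the RB ratio, which is the direction of improvement the metric is meant to capture.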
DOI: 10.1109/JIOT.2022.3153260
ISSN: 2327-4662
EISSN: 2327-4662
Source: IEEE Electronic Library (IEL)
Subjects:
3-D object detection
Artificial neural networks
automated vehicles (AVs)
Automation
Bandwidths
Coders
Compression ratio
Convolution
cooperative perception
Cooperative processing
Decoding
Feature extraction
feature fusion
Feature maps
Object detection
Object recognition
Perception
Recall
Receivers
Semantics
Task analysis
Vehicle safety