Object Classification Based on Enhanced Evidence Theory: Radar-Vision Fusion Approach for Roadside Application

Roadside object detection and classification provide a good understanding of driving scenarios in regard to over-the-horizon perception. However, typical roadside sensors are insufficient when used separately. The fusion of the millimeter-wave (MMW) radar and monovision camera serves as an efficient approach. Unfortunately, the uncertain and conflicting data in extreme light conditions pose challenges to the fusion process. To this end, this study proposed an evidential framework to fuse the radar and camera data. A novel modeling approach for basic belief assignments (BBAs) was proposed, which took the uncertainty of the convolutional neural network (CNN) model into consideration. Moreover, single-scan and multiscan fusion methods were developed based on the enhanced evidence theory, which used different weighting coefficients derived from the reinforced belief (RB) divergence measure and belief entropy (BE). Both numerical and empirical experiments were conducted to investigate the method's performance. In numerical experiments, the belief value of the actual classification increased to 99.01%. In empirical experiments on real datasets collected by roadside devices, the proposed method outperformed state-of-the-art ones, with 71.06% and 87.23% precision under bright-light and low-illumination conditions, respectively. The results verify that the proposed method is effective in fusing conflicting and uncertain data.
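The fusion framework described above builds on Dempster-Shafer evidence theory, in which each sensor's classification output is expressed as a basic belief assignment (BBA) and the BBAs are then combined. As a minimal sketch only, the snippet below implements Dempster's classical rule of combination, not the paper's enhanced weighted rule; the radar and camera mass values and the class labels are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BBAs given as dicts mapping frozenset focal elements to masses."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalize by the non-conflicting mass (Dempster normalization)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical radar and camera BBAs over the frame {car, pedestrian}
radar = {frozenset({"car"}): 0.6, frozenset({"car", "pedestrian"}): 0.4}
camera = {frozenset({"car"}): 0.7, frozenset({"pedestrian"}): 0.3}

fused = dempster_combine(radar, camera)
```

Because both sources lean toward "car", the fused mass on {car} exceeds either input mass, which is the reinforcing behavior the paper's enhanced rule refines for conflicting evidence.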

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, pp. 1-12
Authors: Liu, Pengfei; Yu, Guizhen; Wang, Zhangyu; Zhou, Bin; Chen, Peng
Format: Article
Language: English
DOI: 10.1109/TIM.2022.3154001
ISSN: 0018-9456
EISSN: 1557-9662
Source: IEEE/IET Electronic Library (IEL)
Subjects:
Artificial neural networks
Cameras
Classification
Entropy
Evidence theory
Experiments
Light
Mathematical models
Millimeter waves
object classification
Object recognition
Radar
Radar cross-sections
Radar detection
roadside sensor
Roadsides
Sensors
uncertainty estimation