3D Multi-Object Tracking Based on Dual-Tracker and D-S Evidence Theory

Most of the current self-driving cars are equipped with a variety of sensors (such as lidar, camera), and it has become a trend to integrate multi-sensor information to achieve 3D multi-object tracking (MOT). In addition, unlike the perfect experimental conditions on public datasets, cars on real roads may experience a 3D sensor failure due to weather, mutual interference, etc. In this paper, we propose a dual-tracker-based 3D MOT system, which fuses 2D and 3D information to track objects and can still ensure good tracking accuracy when the 3D sensor fails in a single frame or multiple consecutive frames. Among them, two internal trackers complete the data association of 3D and 2D information, respectively. However, the outputs of the two internal trackers may conflict. First, we calculate the degree of association of potential matched pairs from the features extracted by each sensor, and then normalize it into the mass function of D-S evidence theory. Finally, we combine the evidence to obtain the final matched pairs of detection instances and track estimates. We do extensive experiments on the KITTI MOT dataset and simulate sensor failure scenarios by ignoring the output of the 3D detector. Using the latest evaluation measure for comparison, the results show that our method outperforms other advanced open-source 3D MOTs in a variety of scenarios.

Detailed Description

Bibliographic Details
Published in: IEEE transactions on intelligent vehicles, 2023-03, Vol. 8 (3), p. 2426-2436
Main authors: Ma, Yuanzhi; Zhang, Jindong; Qin, Guihe; Jin, Jingyi; Zhang, Kunpeng; Pan, Dongyu; Chen, Mai
Format: Article
Language: English
Subjects:
Online access: Order full text
Description: Most of the current self-driving cars are equipped with a variety of sensors (such as lidar, camera), and it has become a trend to integrate multi-sensor information to achieve 3D multi-object tracking (MOT). In addition, unlike the perfect experimental conditions on public datasets, cars on real roads may experience a 3D sensor failure due to weather, mutual interference, etc. In this paper, we propose a dual-tracker-based 3D MOT system, which fuses 2D and 3D information to track objects and can still ensure good tracking accuracy when the 3D sensor fails in a single frame or multiple consecutive frames. Among them, two internal trackers complete the data association of 3D and 2D information, respectively. However, the outputs of the two internal trackers may conflict. First, we calculate the degree of association of potential matched pairs from the features extracted by each sensor, and then normalize it into the mass function of D-S evidence theory. Finally, we combine the evidence to obtain the final matched pairs of detection instances and track estimates. We do extensive experiments on the KITTI MOT dataset and simulate sensor failure scenarios by ignoring the output of the 3D detector. Using the latest evaluation measure for comparison, the results show that our method outperforms other advanced open-source 3D MOTs in a variety of scenarios.
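The abstract's fusion step — normalizing each tracker's association scores into mass functions and then combining the evidence — follows the standard Dempster's rule of combination. The sketch below is an illustration of that rule only, not the authors' implementation; the track labels and mass values are invented for the example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses
    to masses over the same frame of discernment) with Dempster's rule."""
    combined = {}
    conflict = 0.0  # total mass assigned to contradictory pairs of evidence
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalize by 1 - K, where K is the conflict mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical example: each internal tracker's normalized association
# scores for matching one detection to candidate tracks T1 and T2.
# The {"T1", "T2"} entry models residual uncertainty.
m_3d = {frozenset({"T1"}): 0.7, frozenset({"T2"}): 0.2, frozenset({"T1", "T2"}): 0.1}
m_2d = {frozenset({"T1"}): 0.5, frozenset({"T2"}): 0.4, frozenset({"T1", "T2"}): 0.1}
fused = dempster_combine(m_3d, m_2d)
best = max(fused, key=fused.get)  # hypothesis with the highest fused mass
```

Here both trackers lean toward `T1`, so the fused mass concentrates on `T1` even though the 2D evidence alone is nearly ambiguous; the conflict mass (3D says `T1` while 2D says `T2`, and vice versa) is discarded and the rest renormalized.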
DOI: 10.1109/TIV.2022.3216102
ISSN: 2379-8858
EISSN: 2379-8904
Source: IEEE Electronic Library (IEL)
Subjects:
3D multi-object tracking
Automobiles
Autonomous cars
D-S evidence theory
Datasets
Detectors
Evidence theory
Image edge detection
multi-sensor fusion
Multiple target tracking
Sensors
Three-dimensional displays
Trajectory