Vehicle tracking across nonoverlapping cameras using joint kinematic and appearance features
We describe a vehicle tracking algorithm using input from a network of nonoverlapping cameras. Our algorithm is based on a novel statistical formulation that uses joint kinematic and image appearance information to link local tracks of the same vehicles into global tracks with longer persistence. …
Saved in:
Main authors: | Matei, B. C., Sawhney, H. S., Samarasekera, S. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Cameras, Kinematics, Radar tracking, Roads, Signal processing algorithms, Target tracking, Vehicles |
Online access: | Order full text |
container_end_page | 3472 |
---|---|
container_issue | |
container_start_page | 3465 |
container_title | CVPR 2011 |
container_volume | |
creator | Matei, B. C.; Sawhney, H. S.; Samarasekera, S. |
description | We describe a vehicle tracking algorithm using input from a network of nonoverlapping cameras. Our algorithm is based on a novel statistical formulation that uses joint kinematic and image appearance information to link local tracks of the same vehicles into global tracks with longer persistence. The algorithm can handle significant spatial separation between the cameras and is robust to challenging tracking conditions such as high traffic density or complex road infrastructure. In these cases, traditional tracking formulations based on MHT or JPDA algorithms may fail to produce track associations across cameras due to the weak predictive models employed. We make several new contributions in this paper. First, we model kinematic constraints between any two local tracks using road networks and transit time distributions. The transit time distributions are calculated dynamically as convolutions of normalized transit time distributions that are learned and adapted separately for individual roads. Second, we present a complete statistical tracker formulation, which combines kinematic and appearance likelihoods within a multi-hypothesis framework. We have extensively evaluated the proposed algorithm using a network of ground-based cameras with narrow fields of view. The tracking results obtained on a large ground-truthed dataset demonstrate the effectiveness of the proposed algorithm. |
doi_str_mv | 10.1109/CVPR.2011.5995575 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1063-6919; ISBN: 1457703947, 9781457703942; EISBN: 1457703939, 1457703955, 9781457703959, 9781457703935; DOI: 10.1109/CVPR.2011.5995575 |
ispartof | CVPR 2011, 2011, p.3465-3472 |
issn | 1063-6919 |
language | eng |
recordid | cdi_ieee_primary_5995575 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Cameras; Kinematics; Radar tracking; Roads; Signal processing algorithms; Target tracking; Vehicles |
title | Vehicle tracking across nonoverlapping cameras using joint kinematic and appearance features |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T11%3A31%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Vehicle%20tracking%20across%20nonoverlapping%20cameras%20using%20joint%20kinematic%20and%20appearance%20features&rft.btitle=CVPR%202011&rft.au=Matei,%20B.%20C.&rft.date=2011-06&rft.spage=3465&rft.epage=3472&rft.pages=3465-3472&rft.issn=1063-6919&rft.isbn=1457703947&rft.isbn_list=9781457703942&rft_id=info:doi/10.1109/CVPR.2011.5995575&rft_dat=%3Cieee_6IE%3E5995575%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=1457703939&rft.eisbn_list=1457703955&rft.eisbn_list=9781457703959&rft.eisbn_list=9781457703935&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=5995575&rfr_iscdi=true |
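The abstract describes two key ideas: route transit-time distributions obtained by convolving normalized per-road transit-time distributions, and a joint link score combining a kinematic likelihood with an appearance likelihood. The sketch below illustrates both ideas; it is not the authors' implementation, and all function names, the triangular toy densities, and the similarity value are hypothetical.

```python
import numpy as np

def route_transit_distribution(road_pdfs, dt):
    """Transit-time density for a route: the convolution of the
    normalized per-road transit-time densities along it."""
    total = road_pdfs[0]
    for pdf in road_pdfs[1:]:
        total = np.convolve(total, pdf) * dt  # discrete density convolution
    return total

def link_likelihood(route_pdf, dt, observed_gap, appearance_sim):
    """Joint score that two local tracks belong to the same vehicle:
    kinematic term (route transit-time density evaluated at the observed
    time gap between the tracks) times an appearance-similarity term."""
    idx = int(round(observed_gap / dt))
    kinematic = route_pdf[idx] if 0 <= idx < len(route_pdf) else 0.0
    return kinematic * appearance_sim

# Toy example: a route of two roads, each with a triangular transit-time
# density (peaks at 10 s and 8 s), discretized at dt = 1 s.
dt = 1.0
t = np.arange(0, 30, dt)
road_a = np.maximum(0.0, 1 - np.abs(t - 10) / 5)
road_a /= road_a.sum() * dt  # normalize to a density
road_b = np.maximum(0.0, 1 - np.abs(t - 8) / 4)
road_b /= road_b.sum() * dt

route = route_transit_distribution([road_a, road_b], dt)
# The route density peaks near 10 + 8 = 18 s; a track pair observed with
# an 18 s gap and high appearance similarity gets a high link score.
score = link_likelihood(route, dt, observed_gap=18.0, appearance_sim=0.9)
```

In the paper's multi-hypothesis framework, scores like `score` would be computed for every candidate pairing of local tracks and the best global association selected; this sketch shows only how one pairwise likelihood could be assembled.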