Multi-Task Vehicle Detection With Region-of-Interest Voting

Vehicle detection is a challenging problem in autonomous driving systems, due to its large structural and appearance variations. In this paper, we propose a novel vehicle detection scheme based on multi-task deep convolutional neural networks (CNNs) and region-of-interest (RoI) voting. In the design of CNN architecture, we enrich the supervised information with subcategory, region overlap, bounding-box regression, and category of each training RoI as a multi-task learning framework. This design allows the CNN model to share visual knowledge among different vehicle attributes simultaneously, and thus, detection robustness can be effectively improved. In addition, most existing methods consider each RoI independently, ignoring the clues from its neighboring RoIs. In our approach, we utilize the CNN model to predict the offset direction of each RoI boundary toward the corresponding ground truth. Then, each RoI can vote those suitable adjacent bounding boxes, which are consistent with this additional information. The voting results are combined with the score of each RoI itself to find a more accurate location from a large number of candidates. Experimental results on the real-world computer vision benchmarks KITTI and the PASCAL2007 vehicle data set show that our approach achieves superior performance in vehicle detection compared with other existing published works.
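The RoI-voting idea described in the abstract — each candidate box predicts, for each of its four boundaries, the offset direction toward the ground truth, and overlapping neighbors whose relative positions agree with those directions contribute their scores as votes — can be illustrated with a minimal sketch. All function names, the IoU threshold, the vote weight, and the sign convention are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def vote_scores(boxes, scores, directions, iou_thresh=0.5, weight=0.5):
    """Combine each box's own score with votes from consistent neighbors.

    boxes:      (N, 4) candidate boxes as (x1, y1, x2, y2)
    scores:     (N,) detection scores
    directions: (N, 4) predicted sign of (ground truth - boundary)
                per boundary: -1, 0, or +1 (assumed convention)
    """
    boxes = np.asarray(boxes, dtype=float)
    directions = np.asarray(directions)
    final = np.array(scores, dtype=float)
    for i in range(len(boxes)):
        for j in range(len(boxes)):
            if i == j or iou(boxes[i], boxes[j]) < iou_thresh:
                continue
            # Neighbor j's boundaries, relative to box i, should lie in the
            # direction box i predicts its own boundaries should move.
            delta = np.sign(boxes[j] - boxes[i])
            consistent = np.all((directions[i] == 0) | (delta == directions[i]))
            if consistent:
                final[i] += weight * scores[j]
    return final
```

Under this sketch, a candidate whose predicted offset directions point toward a well-scoring neighbor accumulates that neighbor's score, so the final ranking favors locations supported by their surroundings rather than by a single RoI score alone.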

Bibliographic Details

Published in: IEEE transactions on image processing, 2018-01, Vol.27 (1), p.432-441
Main authors: Chu, Wenqing; Liu, Yao; Shen, Chen; Cai, Deng; Hua, Xian-Sheng
Format: Article
Language: English
DOI: 10.1109/TIP.2017.2762591
ISSN: 1057-7149
EISSN: 1941-0042
Source: IEEE Electronic Library (IEL)
Subjects: CNN; Feature extraction; multi-task; Object detection; Proposals; region-of-interest; Solid modeling; Three-dimensional displays; Vehicle detection