3D Object Proposals Using Stereo Imagery for Accurate Object Class Detection
The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.
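To make the proposal-generation idea in the abstract concrete, the sketch below scores candidate 3D boxes with a weighted sum of depth-informed terms (point-cloud density inside the box, a simple free-space proxy, and deviation from a class size prior) and keeps the lowest-energy candidates. This is an illustrative approximation, not the authors' implementation: the axis-aligned box parameterization, the shell-based free-space proxy, the weights, the size prior, and the function names are all assumptions made for this example.

```python
import numpy as np

def box_energy(box, points, weights=(1.0, 1.0, 0.5), prior_hwl=(1.5, 1.6, 3.9)):
    """Energy of one axis-aligned candidate box (cx, cy, cz, h, w, l); lower is better."""
    cx, cy, cz, h, w, l = box
    lo = np.array([cx - l / 2.0, cy - w / 2.0, cz - h / 2.0])
    hi = np.array([cx + l / 2.0, cy + w / 2.0, cz + h / 2.0])
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    density = inside.sum() / max(h * w * l, 1e-6)            # stereo/LIDAR points per m^3
    # Crude free-space proxy: points just outside the box argue against this placement
    # (the paper instead reasons about depth-map rays that would pass through the box).
    shell = np.all((points >= lo - 0.3) & (points <= hi + 0.3), axis=1) & ~inside
    size_dev = np.abs(np.array([h, w, l]) - np.array(prior_hwl)).sum()   # size prior
    return -weights[0] * density + weights[1] * shell.sum() + weights[2] * size_dev

def propose(candidates, points, k=2000):
    """Score ground-plane candidates and return the k lowest-energy boxes as proposals."""
    energies = np.array([box_energy(b, points) for b in candidates])
    return [candidates[i] for i in np.argsort(energies)[:k]]
```

In the paper, these proposals are then passed to a CNN that jointly regresses 3D bounding box coordinates and object pose; the sketch only covers the energy-based scoring stage.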
Saved in:

Published in: IEEE transactions on pattern analysis and machine intelligence, 2018-05, Vol.40 (5), p.1259-1272
Main authors: Chen, Xiaozhi; Kundu, Kaustav; Zhu, Yukun; Ma, Huimin; Fidler, Sanja; Urtasun, Raquel
Format: Article
Language: eng
Subjects: 3D object detection; autonomous driving; Context; convolutional neural networks; Detectors; Ground plane; Image detection; Image quality; Laser radar; LIDAR; Object detection; Object proposals; Object recognition; Proposals; Solid modeling; stereo; Three-dimensional displays
Online access: Order full text
| Field | Value |
|---|---|
| container_end_page | 1272 |
| container_issue | 5 |
| container_start_page | 1259 |
| container_title | IEEE transactions on pattern analysis and machine intelligence |
| container_volume | 40 |
| creator | Chen, Xiaozhi; Kundu, Kaustav; Zhu, Yukun; Ma, Huimin; Fidler, Sanja; Urtasun, Raquel |
| description | The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result. |
| doi_str_mv | 10.1109/TPAMI.2017.2706685 |
| format | Article |
| fulltext | fulltext_linktorsrc |
| identifier | ISSN: 0162-8828 |
| ispartof | IEEE transactions on pattern analysis and machine intelligence, 2018-05, Vol.40 (5), p.1259-1272 |
| issn | 0162-8828; 1939-3539; 2160-9292 |
| language | eng |
| recordid | cdi_crossref_primary_10_1109_TPAMI_2017_2706685 |
| source | IEEE Electronic Library (IEL) |
| subjects | 3D object detection; autonomous driving; Context; convolutional neural networks; Detectors; Ground plane; Image detection; Image quality; Laser radar; LIDAR; Object detection; Object proposals; Object recognition; Proposals; Solid modeling; stereo; Three-dimensional displays |
| title | 3D Object Proposals Using Stereo Imagery for Accurate Object Class Detection |