LDLS: 3-D Object Segmentation Through Label Diffusion From 2-D Images

Object segmentation in three-dimensional (3-D) point clouds is a critical task for robots capable of 3-D perception. Despite the impressive performance of deep learning-based approaches on object segmentation in 2-D images, deep learning has not been applied nearly as successfully for 3-D point cloud segmentation. Deep networks generally require large amounts of labeled training data, which are readily available for 2-D images but are difficult to produce for 3-D point clouds. In this letter, we present Label Diffusion Lidar Segmentation (LDLS), a novel approach for 3-D point cloud segmentation, which leverages 2-D segmentation of an RGB image from an aligned camera to avoid the need for training on annotated 3-D data. We obtain 2-D segmentation predictions by applying Mask-RCNN to the RGB image, and then link this image to a 3-D lidar point cloud by building a graph of connections among 3-D points and 2-D pixels. This graph then directs a semi-supervised label diffusion process, where the 2-D pixels act as source nodes that diffuse object label information through the 3-D point cloud, resulting in a complete 3-D point cloud segmentation. We conduct empirical studies on the KITTI benchmark dataset and on a mobile robot, demonstrating wide applicability and superior performance of LDLS compared with the previous state of the art in 3-D point cloud segmentation, without any need for either 3-D training data or fine tuning of the 2-D image segmentation model.
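
The abstract gives enough detail to sketch the core mechanism. The Python fragment below is a minimal, hypothetical illustration written against the description above, not the authors' released code: it assumes a known 3x4 camera projection matrix P and boolean instance masks from an off-the-shelf 2-D segmenter such as Mask-RCNN, and it simplifies the paper's point-pixel graph by baking the 2-D masks directly into per-point seed labels and connecting points only through a k-nearest-neighbor graph. All names (project_to_image, diffuse_labels, k, alpha, n_iters) are illustrative assumptions.

import numpy as np
from scipy import sparse
from sklearn.neighbors import kneighbors_graph

def project_to_image(points, P):
    # Homogeneous projection of Nx3 lidar points with a 3x4 camera matrix P.
    # Assumes all points lie in front of the camera (positive depth).
    homog = np.hstack([points, np.ones((len(points), 1))])   # N x 4
    uvw = homog @ P.T                                        # N x 3
    return uvw[:, :2] / uvw[:, 2:3]                          # N x 2 pixel coords

def diffuse_labels(points, masks, P, k=10, alpha=0.9, n_iters=50):
    # masks: K x H x W boolean instance masks from a 2-D segmenter.
    # Returns per-point instance ids in {-1, 0, ..., K-1}; -1 = background.
    K, H, W = masks.shape
    uv = np.round(project_to_image(points, P)).astype(int)
    cols = uv[:, 0].clip(0, W - 1)
    rows = uv[:, 1].clip(0, H - 1)
    in_view = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)

    # Seed label distributions: column 0 = background, columns 1..K = objects.
    z0 = np.zeros((len(points), K + 1))
    z0[:, 0] = 1.0
    for i in range(K):
        hit = in_view & masks[i, rows, cols]
        z0[hit, 0] = 0.0
        z0[hit, i + 1] = 1.0

    # Point-to-point edges: symmetrized kNN graph, row-normalized so each
    # diffusion step averages a point's label over its neighbors.
    A = kneighbors_graph(points, k, mode='connectivity', include_self=False)
    A = 0.5 * (A + A.T)
    deg = np.asarray(A.sum(axis=1)).ravel()
    Wn = sparse.diags(1.0 / deg) @ A

    # Semi-supervised diffusion: smooth over the graph while re-injecting
    # the 2-D seed labels each iteration.
    z = z0.copy()
    for _ in range(n_iters):
        z = alpha * (Wn @ z) + (1.0 - alpha) * z0

    return z.argmax(axis=1) - 1

The (1 - alpha) * z0 clamping term keeps the 2-D evidence from washing out as the smoothing iterations spread labels to points the masks never touched; this is the standard stabilizer in semi-supervised label propagation, and stands in here for however the paper schedules its diffusion.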

Bibliographic Details
Published in: IEEE robotics and automation letters, 2019-07, Vol. 4 (3), p. 2902-2909
Main Authors: Wang, Brian H.; Chao, Wei-Lun; Wang, Yan; Hariharan, Bharath; Weinberger, Kilian Q.; Campbell, Mark
Format: Article
Language: English
Subjects: Cameras; Deep learning; Diffusion; Image segmentation; Laser radar; Lidar; Machine learning; Object detection; Pixels; RGB-D perception; Robots; Segmentation and categorization; Sensors; Space perception; Task analysis; Three-dimensional models; Three-dimensional displays; Training; Two-dimensional displays; Two-dimensional models
DOI: 10.1109/LRA.2019.2922582
ISSN: 2377-3766
EISSN: 2377-3766
Publisher: IEEE, Piscataway
Source: IEEE Electronic Library (IEL)
Online Access: Order full text