Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction

In this letter, we propose a new deep-learning-based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via a sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
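To make the abstract's two coupled ideas concrete (one network predicting both depth and surface normals from a shared encoder, and a sparse-to-dense step that spreads sparse odometry depths along the predicted normals), here is a minimal sketch in PyTorch. The letter does not publish an implementation; the class name, layer sizes, and the plane-based propagate_depth helper below are illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthNormalNet(nn.Module):
    """Hypothetical sketch: a shared encoder with a depth head and a
    normal head, so both quantities come from a single, jointly trained
    network as the abstract describes. Layer sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 3, padding=1)   # per-pixel depth
        self.normal_head = nn.Conv2d(64, 3, 3, padding=1)  # per-pixel normal

    def forward(self, rgb):
        feat = self.encoder(rgb)
        depth = F.softplus(self.depth_head(feat))             # keep depth positive
        normal = F.normalize(self.normal_head(feat), dim=1)   # unit-length normals
        return depth, normal

def propagate_depth(d_p, normal, ray_p, ray_q):
    """Plane-induced depth propagation (hypothetical helper): if pixels
    p and q lie on the same local plane with unit normal `normal`, the
    plane equation n . (d_q * ray_q) = n . (d_p * ray_p) gives the depth
    at q. ray_p and ray_q are the back-projected camera rays
    K^{-1} [u, v, 1]^T as 1-D 3-vectors."""
    return d_p * torch.dot(normal, ray_p) / torch.dot(normal, ray_q)

For example, net = DepthNormalNet(); depth, normal = net(torch.rand(1, 3, 64, 64)) yields a one-channel depth map and a three-channel unit-normal map at input resolution, and propagate_depth can then densify the depth around each sparse anchor point under the local-plane assumption.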

Full description

Saved in:
Bibliographic details
Published in: IEEE Robotics and Automation Letters, 2019-04, Vol. 4 (2), p. 530-537
Main authors: Jiexiong Tang; Folkesson, John; Jensfelt, Patric
Format: Article
Language: English
Subjects: Deep learning; deep learning in robotics and automation; Estimation; Image reconstruction; Machine learning; Odometers; Optical tracking; Performance enhancement; Predictions; Reconstruction; Simultaneous localization and mapping; SLAM; Three dimensional models; Three-dimensional displays; Training; Visual-based navigation; Visualization
Online access: Order full text
DOI: 10.1109/LRA.2019.2891433
ISSN: 2377-3766
EISSN: 2377-3766
Publisher: IEEE (Piscataway)
Source: IEEE Electronic Library (IEL)