SGANVO: Unsupervised Deep Visual Odometry and Depth Estimation With Stacked Generative Adversarial Networks

Recently, end-to-end unsupervised deep learning methods have demonstrated impressive performance on visual depth and ego-motion estimation tasks. These data-driven learning methods do not rely on the limiting assumptions that geometry-based methods do. Encoder-decoder networks have been widely used for depth estimation, and recurrent convolutional neural networks (RCNNs) have brought significant improvements to ego-motion estimation. Furthermore, recent uses of generative adversarial networks (GANs) in depth and ego-motion estimation have demonstrated that the estimates can be further improved by generating images within the adversarial learning game. This paper proposes a novel unsupervised network system for visual depth and ego-motion estimation: a stacked generative adversarial network. It consists of a stack of GAN layers, of which the lowest layer estimates depth and ego-motion while the higher layers estimate spatial features. It can also capture temporal dynamics through a recurrent representation across the layers. We select the most commonly used KITTI dataset for evaluation. The evaluation results show that our proposed method can produce better or comparable results in depth and ego-motion estimation.
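To make the architecture described above concrete, here is a minimal PyTorch sketch of the stacked-GAN idea. It is not the authors' implementation: the class names (GANLayer, StackedGAN), the layer sizes, the single-tensor hidden state, and the 6-DoF pose head are all illustrative assumptions, chosen only to mirror the abstract's description of a lowest layer that emits depth and ego-motion, higher layers that model spatial features, and a recurrent representation across the layers.

    # Hypothetical sketch only: names and shapes are assumptions, not the
    # published SGANVO architecture.
    import torch
    import torch.nn as nn

    class GANLayer(nn.Module):
        """One layer of the stack: a generator with a recurrent hidden
        state and a discriminator that scores the generated features."""
        def __init__(self, channels):
            super().__init__()
            self.generator = nn.Sequential(
                nn.Conv2d(2 * channels, channels, 3, padding=1),
                nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.discriminator = nn.Sequential(
                nn.Conv2d(channels, channels, 3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(channels, 1),  # real/fake score
            )

        def forward(self, features, hidden):
            # Recurrent representation: concatenate the previous hidden
            # state with the incoming features before generating.
            generated = self.generator(torch.cat([features, hidden], dim=1))
            return generated, self.discriminator(generated)

    class StackedGAN(nn.Module):
        """A stack of GAN layers; per the abstract, the lowest layer emits
        depth and ego-motion while higher layers refine spatial features."""
        def __init__(self, channels=16, num_layers=3):
            super().__init__()
            self.layers = nn.ModuleList([GANLayer(channels) for _ in range(num_layers)])
            self.depth_head = nn.Conv2d(channels, 1, 3, padding=1)  # per-pixel depth
            self.pose_head = nn.Linear(channels, 6)                 # 6-DoF ego-motion

        def forward(self, x, hiddens):
            scores = []
            # Pass features down the stack, highest layer first.
            for layer, hidden in zip(reversed(self.layers), reversed(hiddens)):
                x, score = layer(x, hidden)
                scores.append(score)
            depth = self.depth_head(x)
            pose = self.pose_head(x.mean(dim=(2, 3)))
            return depth, pose, scores

    # Dummy forward pass: batch of 2 feature maps at a KITTI-like resolution.
    net = StackedGAN()
    feats = torch.randn(2, 16, 128, 416)
    hiddens = [torch.zeros_like(feats) for _ in net.layers]
    depth, pose, scores = net(feats, hiddens)
    print(depth.shape, pose.shape)  # (2, 1, 128, 416) and (2, 6)

In a real training loop, each discriminator score would feed an adversarial loss against warped or reconstructed images, and the hidden states would be carried across frames to capture temporal dynamics; those details are not given in this record.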

Bibliographic details
Published in: IEEE robotics and automation letters, 2019-10, Vol. 4 (4), p. 4431-4437
Main authors: Feng, Tuo; Gu, Dongbing
Format: Article
Language: English
Subjects: Coders; Deep Learning in Robotics and Automation; Estimation; Gallium nitride; Generative adversarial networks; Generators; Image reconstruction; Localization; Machine learning; Mapping; Motion simulation; Odometers; Pictures; SLAM; Teaching methods; Training; Visual odometry
Online access: Order full text
DOI: 10.1109/LRA.2019.2925555
ISSN: 2377-3766
EISSN: 2377-3766
Source: IEEE Electronic Library (IEL)