SGANVO: Unsupervised Deep Visual Odometry and Depth Estimation With Stacked Generative Adversarial Networks

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, 2019-10, Vol. 4 (4), pp. 4431-4437
Main Authors: Feng, Tuo; Gu, Dongbing
Format: Article
Language: English
Description
Abstract: Recently, end-to-end unsupervised deep learning methods have demonstrated impressive performance on visual depth and ego-motion estimation tasks. These data-driven learning methods do not rely on the same limiting assumptions that geometry-based methods do. The encoder-decoder network has been widely used for depth estimation, and the RCNN has brought significant improvements to ego-motion estimation. Furthermore, the latest use of generative adversarial networks (GANs) in depth and ego-motion estimation has demonstrated that the estimation can be further improved by generating images in the adversarial game-learning process. This paper proposes a novel unsupervised network system for visual depth and ego-motion estimation: the stacked generative adversarial network. It consists of a stack of GAN layers, of which the lowest layer estimates the depth and ego-motion while the higher layers estimate the spatial features. It can also capture the temporal dynamics due to the use of a recurrent representation across the layers. We select the most commonly used KITTI dataset for evaluation. The evaluation results show that our proposed method can produce better or comparable results in depth and ego-motion estimation.
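To make the layered structure described in the abstract concrete, the following is a minimal PyTorch sketch of the general idea, not the authors' implementation: a stack of generator layers with a simple convolutional recurrent state, where the lowest layer carries heads for per-pixel depth and 6-DoF ego-motion. All layer widths, the hidden-state update, the heads, and the small patch discriminator are illustrative assumptions.

import torch
import torch.nn as nn

class GANLayer(nn.Module):
    """One generator layer of the stack with a convolutional recurrent state."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.encode = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)
        self.update = nn.Conv2d(hid_ch, hid_ch, 3, padding=1)

    def forward(self, x, h):
        z = torch.relu(self.encode(torch.cat([x, h], dim=1)))
        h_next = torch.tanh(self.update(z))   # recurrent representation across time
        return z, h_next

class StackedGenerator(nn.Module):
    """Stack of GAN-layer generators; the lowest layer predicts depth and pose."""
    def __init__(self, img_ch=3, hid_ch=32, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            GANLayer(img_ch if i == 0 else hid_ch, hid_ch) for i in range(n_layers)
        )
        self.depth_head = nn.Conv2d(hid_ch, 1, 3, padding=1)  # per-pixel depth
        self.pose_head = nn.Linear(hid_ch, 6)                 # 6-DoF ego-motion

    def forward(self, frame, states):
        x, new_states = frame, []
        for layer, h in zip(self.layers, states):
            x, h_next = layer(x, h)            # higher layers refine spatial features
            new_states.append(h_next)
        lowest = new_states[0]                 # lowest layer drives the two estimates
        depth = torch.sigmoid(self.depth_head(lowest))
        pose = self.pose_head(lowest.mean(dim=[2, 3]))
        return depth, pose, new_states

class PatchDiscriminator(nn.Module):
    """Small discriminator for scoring synthesized views against real frames."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )

    def forward(self, img):
        return self.net(img)

# Hypothetical usage over a 5-frame sequence of 128x416 KITTI-sized images.
model = StackedGenerator()
states = [torch.zeros(2, 32, 128, 416) for _ in model.layers]
for frame in torch.rand(5, 2, 3, 128, 416):
    depth, pose, states = model(frame, states)
print(depth.shape, pose.shape)   # torch.Size([2, 1, 128, 416]) torch.Size([2, 6])

In an adversarial setup of the kind the abstract describes, the predicted depth and pose would be used to warp neighbouring frames into the current view, and the discriminator would score those generated images against the real ones; the details above are a sketch under these assumptions only.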
ISSN: 2377-3766
DOI: 10.1109/LRA.2019.2925555