Scene depth and camera position and posture solving method based on deep learning



Bibliographic Details
Main Authors: QUAN HONGYAN, YAO MINGWEI
Format: Patent
Language: Chinese; English
Description

Summary: The invention discloses a scene depth and camera position and posture solving method based on deep learning. The method takes an image sequence as input and uses a convolutional neural network together with a recurrent neural network to estimate the scene depth and the camera position and posture parameters between two adjacent images. The method adopts a multi-task learning framework and defines the network's loss function using the consistency of the three-dimensional scene geometry reconstructed from two adjacent images in the sequence, thereby ensuring the accuracy of the scene depth and camera position and posture estimates.
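The abstract is all the record provides, but as a rough illustration of the pipeline it describes (a convolutional encoder feeding a recurrent unit, jointly trained depth and pose heads, and a loss that compares the geometry reconstructed from two adjacent frames), a minimal sketch follows. PyTorch, the layer sizes, the coarse 8x8 depth resolution, the GRU choice, and the omitted view-warping step are all assumptions made for illustration, not the patent's actual design.

```python
# Minimal sketch only: PyTorch is assumed, and every layer size, the 8x8 depth
# resolution, the GRU, and the loss weights are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthPoseNet(nn.Module):
    """CNN encoder + recurrent unit with a depth head and a 6-DoF pose head."""

    def __init__(self, hidden=256):
        super().__init__()
        # Convolutional encoder shared by both tasks (multi-task learning).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Recurrent unit aggregates information along the image sequence.
        self.rnn = nn.GRU(128 * 8 * 8, hidden, batch_first=True)
        # Depth head: coarse per-pixel inverse depth for each frame.
        self.depth_head = nn.Linear(hidden, 8 * 8)
        # Pose head: relative camera position/posture between adjacent frames
        # (3 translation + 3 rotation parameters).
        self.pose_head = nn.Linear(hidden, 6)

    def forward(self, frames):
        # frames: (batch, seq_len, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).flatten(1)  # (b*t, 8192)
        states, _ = self.rnn(feats.view(b, t, -1))              # (b, t, hidden)
        inv_depth = F.softplus(self.depth_head(states)).view(b, t, 1, 8, 8)
        pose = self.pose_head(states)                           # (b, t, 6)
        return inv_depth, pose


def consistency_loss(depth_a, depth_b_in_a, image_a, image_b_in_a,
                     w_geo=1.0, w_photo=1.0):
    # The 3-D geometry (and appearance) reconstructed from two adjacent frames
    # should agree once frame b has been warped into frame a's view using the
    # predicted depth and pose; the warping itself is not shown here.
    geo_term = (depth_a - depth_b_in_a).abs().mean()
    photo_term = (image_a - image_b_in_a).abs().mean()
    return w_geo * geo_term + w_photo * photo_term


if __name__ == "__main__":
    net = DepthPoseNet()
    frames = torch.randn(2, 5, 3, 128, 128)  # two dummy 5-frame sequences
    inv_depth, pose = net(frames)
    print(inv_depth.shape, pose.shape)        # (2, 5, 1, 8, 8) and (2, 5, 6)
```

In a full implementation the predicted depth and pose would be used to warp one frame's reconstruction into the other view before the consistency terms are applied; that warping step is deliberately left out of this sketch.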