Geometric Consistency-Guaranteed Spatio-Temporal Transformer for Unsupervised Multiview 3-D Pose Estimation

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2024, Vol. 73, pp. 1-12
Authors: Dong, Kaiwen; Riou, Kevin; Zhu, Jingwen; Pastor, Andreas; Subrin, Kevin; Zhou, Yu; Yun, Xiao; Sun, Yanjing; Le Callet, Patrick
Format: Article
Language: English
Description
Abstract: Unsupervised 3-D pose estimation has gained prominence due to the challenges in acquiring labeled 3-D data for training. Despite promising progress, unsupervised approaches still lag behind supervised methods in performance. Two factors impede the progress of unsupervised approaches: incomplete geometric constraints and inadequate interaction among spatial, temporal, and multiview features. This article introduces an unsupervised pipeline that uses calibrated camera parameters as geometric constraints across views and coordinate spaces to optimize the model by minimizing inconsistencies between the 2-D input pose and the reprojection of the predicted 3-D pose. The pipeline employs a novel hierarchical cross transformer (HCT) to encode higher levels of information by enabling interactions among hierarchical features carrying different levels of temporal, spatial, and cross-view information. By minimizing reliance on human-specific parts, the HCT shows potential for adapting to various pose estimation tasks. To validate this adaptability, we build a connection between human pose estimation and scene pose estimation, introducing a dynamic-keypoints-3-D (DKs-3D) dataset tailored for 3-D scene pose estimation in robotic manipulation. Experiments on two 3-D human pose estimation datasets demonstrate our method's new state-of-the-art performance among weakly supervised and unsupervised approaches. The adaptability of our method is confirmed through experiments on DKs-3D, setting the initial benchmark for unsupervised 2-D-to-3-D scene pose lifting.
ISSN: 0018-9456 (print), 1557-9662 (electronic)
DOI: 10.1109/TIM.2024.3440376