MuTr: Multi-Stage Transformer for Hand Pose Estimation from Full-Scene Depth Image


Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2023-06, Vol. 23 (12), p. 5509
Main Authors: Kanis, Jakub; Gruber, Ivan; Krňoul, Zdeněk; Boháček, Matyáš; Straka, Jakub; Hrúz, Marek
Format: Article
Language: English
Online Access: Full Text
Description
Summary: This work presents DePOTR, a novel transformer-based method for hand pose estimation. We test DePOTR on four benchmark datasets, where it outperforms other transformer-based methods while achieving results on par with other state-of-the-art methods. To further demonstrate the strength of DePOTR, we propose MuTr, a novel multi-stage approach that operates on the full-scene depth image. MuTr removes the need for two different models in the hand pose estimation pipeline (one for hand localization and one for pose estimation) while maintaining promising results. To the best of our knowledge, this is the first successful attempt to use the same model architecture in both the standard and the full-scene image setup while achieving competitive results in both. On the NYU dataset, DePOTR and MuTr reach a precision of 7.85 mm and 8.71 mm, respectively.
ISSN: 1424-8220
DOI: 10.3390/s23125509