Adaptive sensor fusion labeling framework for hand pose recognition in robot teleoperation


Published in: Assembly Automation, 2021-07, Vol. 41 (3), p. 393-400
Authors: Qi, Wen; Liu, Xiaorui; Zhang, Longbin; Wu, Lunan; Zang, Wenchuan; Su, Hang
Format: Article
Language: English
Abstract

Purpose: This paper centers on touchless interaction between humans and robots in the real world. Accurate hand pose identification and stable operation in a non-stationary environment are the main challenges, especially when multiple sensors are involved. To guarantee a high recognition rate and low computational time for the human-machine interaction system, an adaptive sensor fusion labeling framework should be considered for surgical robot teleoperation.

Design/methodology/approach: A hand pose estimation model is proposed, consisting of automatic labeling and a classifier based on a deep convolutional neural network (DCNN). Subsequently, an adaptive sensor fusion methodology is proposed for hand pose estimation with two Leap Motion controllers. The sensor fusion system processes depth data and electromyography (EMG) signals, captured from the Leap Motion controllers and a Myo Armband, respectively. The developed adaptive methodology can perform stable and continuous hand position estimation even when a single sensor is unable to detect the hand.

Findings: The proposed adaptive sensor fusion method is verified with experiments covering six degrees of freedom in space. The results show that the clustering model achieves the highest clustering accuracy (96.31%) among the compared methods, so its clusters can be regarded as real gestures. Moreover, the DCNN classifier achieves the best performance among the compared methods, with 88.47% accuracy and the lowest computational time.

Originality/value: This study provides theoretical and engineering guidance for hand pose recognition in surgical robot teleoperation and presents a new deep learning model for accuracy enhancement.
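The abstract's key fusion property is graceful degradation: pose estimation continues even when one sensor loses the hand. A minimal sketch of that idea (not the authors' implementation; the function name, the confidence-weighted averaging rule, and the 6-DoF list representation are all illustrative assumptions) might look like:

```python
# Illustrative sketch of an adaptive two-sensor fusion rule: blend two
# 6-DoF pose estimates by per-sensor confidence, and fall back to the
# remaining sensor when the other fails to detect the hand (pose is None).
def fuse_poses(pose_depth, conf_depth, pose_emg, conf_emg):
    """Each pose is a length-6 list [x, y, z, roll, pitch, yaw] or None.

    Returns the fused pose, a single sensor's pose as fallback,
    or None if neither sensor detects the hand.
    """
    if pose_depth is None and pose_emg is None:
        return None                      # no sensor sees the hand
    if pose_depth is None:
        return list(pose_emg)            # fall back to EMG-derived estimate
    if pose_emg is None:
        return list(pose_depth)          # fall back to depth-derived estimate
    # Both sensors available: confidence-weighted average of each component.
    w = conf_depth / (conf_depth + conf_emg)
    return [w * a + (1 - w) * b for a, b in zip(pose_depth, pose_emg)]
```

With equal confidences the result is a plain component-wise average; as one sensor's confidence drops toward zero, its contribution smoothly vanishes, which is one simple way to realize the continuous estimation the abstract describes.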
ISSN:0144-5154
2754-6969
1758-4078
2754-6977
DOI:10.1108/AA-11-2020-0178