Hand Gesture Recognition Across Various Limb Positions Using a Multimodal Sensing System Based on Self-Adaptive Data-Fusion and Convolutional Neural Networks (CNNs)
Saved in:
Published in: IEEE Sensors Journal, 2024-06, Vol. 24 (11), p. 18633-18645
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: This study explores the challenge of hand gesture recognition across various limb positions using a new co-located multimodal armband system incorporating surface electromyography (sEMG) and pressure-based force myography (pFMG) sensors. Conventional machine learning (ML) algorithms and convolutional neural network (CNN) models were evaluated for accurately recognizing hand gestures. A comprehensive investigation was conducted, encompassing feature-level and decision-level CNN models, alongside advanced fusion techniques to enhance recognition performance. This research consistently demonstrates the superiority of CNN models, revealing their potential for extracting intricate patterns from raw multimodal sensor data. The study showed significant accuracy improvements over single-modality approaches, emphasizing the synergistic effects of multimodal sensing. Notably, the CNN models achieved 88.34% accuracy with self-adaptive decision-level fusion and 87.79% accuracy with feature-level fusion, outperforming linear discriminant analysis (LDA), which reached 83.33% accuracy when considering all nine gestures. Furthermore, the study explores the relationship between the number of hand gestures and recognition accuracy, revealing consistently high accuracy levels ranging from 88% to 100% for two to nine gestures and a remarkable 98% accuracy for the commonly used five gestures. This research underscores the adaptability of CNNs in effectively capturing the complex complementarity between multimodal data and varying limb positions, advancing the field of gesture recognition and emphasizing the potential of high-level data-fusion deep learning (DL) techniques in wearable sensing systems. This study provides valuable insight into how multimodal sensor/data fusion, coupled with advanced ML methods, enhances hand gesture recognition accuracy, ultimately paving the way for more effective and adaptable wearable technology applications.
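The decision-level fusion described in the abstract combines per-gesture probability outputs from the sEMG and pFMG classifier branches. A minimal sketch of one plausible "self-adaptive" weighting scheme is shown below, where each modality's weight is taken from its own prediction confidence (the maximum softmax probability). The function name, the confidence-based weighting, and the example probability vectors are illustrative assumptions; the record does not detail the paper's actual adaptation rule.

```python
import numpy as np

def self_adaptive_decision_fusion(p_semg, p_pfmg):
    """Fuse per-gesture probability vectors from two modality-specific
    classifiers. The weights here come from each branch's prediction
    confidence (max probability) -- a hypothetical stand-in for the
    paper's self-adaptive scheme, which this record does not specify."""
    w_semg = np.max(p_semg)
    w_pfmg = np.max(p_pfmg)
    fused = (w_semg * p_semg + w_pfmg * p_pfmg) / (w_semg + w_pfmg)
    return fused / fused.sum()  # renormalize to a probability vector

# Example: softmax outputs for nine gestures from the two branches
p_semg = np.array([0.05, 0.60, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])
p_pfmg = np.array([0.10, 0.40, 0.10, 0.10, 0.05, 0.05, 0.05, 0.05, 0.10])
fused = self_adaptive_decision_fusion(p_semg, p_pfmg)
predicted_gesture = int(np.argmax(fused))
```

Because the more confident branch receives the larger weight, a modality degraded by a particular limb position contributes less to the final decision, which is the intuition behind adaptive decision-level fusion.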
ISSN: 1530-437X (print); 1558-1748 (electronic)
DOI: | 10.1109/JSEN.2024.3389963 |