Non-Invasive Tongue-Based HCI System Using Deep Learning for Microgesture Detection
Saved in:
Published in: Revue d'Intelligence Artificielle, 2023-08, Vol. 37 (4), p. 985-995
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Tongue-based Human-Computer Interaction (HCI) systems have emerged as alternative input devices that offer significant benefits to individuals with severe disabilities. However, these systems often rely on invasive components such as dental retainers, tongue piercings, and multiple in-mouth electrodes, which hygiene concerns and obtrusiveness render impractical for daily use. This paper presents a novel non-invasive tongue-based HCI system that uses deep learning for microgesture detection. The proposed system overcomes the limitations of previous methods by detecting gestures non-invasively: tongue vibrations are measured with an accelerometer positioned over the genioglossus muscle, eliminating the need for in-mouth installations. The system's performance was evaluated by comparing deep learning classification results with four widely used supervised machine learning algorithms: K-Nearest Neighbors (KNN), Support Vector Machines (SVM), Decision Trees, and Random Forests. For these classifiers, the raw data were preprocessed in both the time and frequency domains to extract relevant features before classification. In addition, a Convolutional Neural Network (CNN) was trained directly on the raw data, exploiting its ability to process time series and to capture intricate patterns automatically through convolutional and pooling layers. The CNN achieved a 97% success rate in tongue gesture detection. The proposed system is also low-profile, lightweight, and cost-effective, making it suitable for daily use in a variety of contexts. This study thus introduces a non-invasive, efficient, and practical approach to tongue-based HCI.
ISSN: 0992-499X, 1958-5748
DOI: 10.18280/ria.370420
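To make the modelling approach described in the abstract concrete, below is a minimal sketch, not the authors' code, of a 1D CNN that classifies fixed-length accelerometer windows into tongue microgesture classes. The window length, number of axes, class count, and layer sizes are illustrative assumptions; only the general convolution-and-pooling pattern applied to raw time series is taken from the abstract.

```python
# Illustrative sketch (assumed hyperparameters, not the paper's model):
# a 1D CNN that maps 3-axis accelerometer windows to gesture classes.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 128   # assumed samples per gesture window
N_CHANNELS = 3     # x, y, z accelerometer axes
N_CLASSES = 4      # assumed number of tongue microgestures

def build_cnn() -> tf.keras.Model:
    """Stack Conv1D + MaxPooling1D blocks, then a dense classifier,
    mirroring the convolution/pooling pattern described in the abstract."""
    model = models.Sequential([
        layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random placeholder data standing in for labelled accelerometer windows.
    X = np.random.randn(256, WINDOW_LEN, N_CHANNELS).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=256)
    model = build_cnn()
    model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```

Per the abstract, the comparison baselines would instead feed hand-crafted time- and frequency-domain features to KNN, SVM, Decision Tree, and Random Forest classifiers, against which the CNN's accuracy on raw windows is measured.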