Smart Glove and Hand Gesture-Based Control Interface for Multi-Rotor Aerial Vehicles in a Multi-Subject Environment


Detailed Description

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 227667-227677
Main Authors: Haratiannejadi, Kianoush; Selmic, Rastko R.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: This paper introduces an adaptable human-computer interaction method for controlling multi-rotor aerial vehicles in unsupervised, multi-subject environments. A region-based convolutional neural network (R-CNN) first detects the subjects in a frame and their facial regions of interest (RoIs), which are then fed to a facial recognition module that searches for the main subject within the frame. The R-CNN model supplies the main subject's right-hand RoI to a convolutional neural network (CNN) that classifies the right-hand gesture. A motion processing unit (MPU) and four flex sensors embedded in a smart glove worn on the left hand produce discrete and continuous signals. These signals, generated from the bending of the left-hand fingers and the left hand's roll angle, are fed to a support vector machine (SVM) that classifies the left-hand gesture. Three validation layers are implemented: human-based validation, classification validation, and system validation. Comprehensive experimental results validate the proposed method.
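The left-hand classification stage summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the five-element feature layout (four flex-sensor bend readings plus the MPU roll angle), the gesture labels, and the synthetic sensor data are all assumptions made for the example.

```python
# Hedged sketch of SVM-based left-hand gesture classification.
# Assumed feature vector: four flex-sensor bend readings (0..1) + roll angle (deg).
# Gesture labels and sensor distributions are illustrative, not from the paper.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synth_sample(gesture):
    # Toy generator: "fist" bends all fingers strongly, "open" barely bends them.
    base = {"fist": 0.9, "open": 0.1}[gesture]
    flex = base + 0.05 * rng.standard_normal(4)  # four flex-sensor readings
    roll = rng.uniform(-30.0, 30.0)              # roll angle from the MPU, degrees
    return np.concatenate([flex, [roll]])

# Build a small synthetic training set and fit the classifier.
X = np.array([synth_sample(g) for g in ["fist", "open"] * 50])
y = ["fist", "open"] * 50
clf = SVC(kernel="rbf").fit(X, y)

print(clf.predict([synth_sample("fist")])[0])
```

In this sketch the flex readings carry the class signal while the roll angle is uninformative noise; in a real glove the roll angle would feed the continuous control channel mentioned in the abstract.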
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3045858