Efficient machine learning approach for volunteer eye-blink detection in real-time using webcam

Bibliographic Details
Published in: Expert Systems with Applications, 2022-02, Vol. 188, p. 116073, Article 116073
Authors: Medeiros, Paulo Augusto de Lima; Silva, Gabriel Vinícius Souza da; Fernandes, Felipe Ricardo dos Santos; Sánchez-Gendriz, Ignacio; Lins, Hertz Wilton Castro; Barros, Daniele Montenegro da Silva; Nagem, Danilo Alves Pinto; Valentim, Ricardo Alexsandro de Medeiros
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The progressive diminishment of motor capacities due to Amyotrophic Lateral Sclerosis (ALS) causes a severe communication deficit. The development of Alternative Communication software aids ALS patients in overcoming communication issues, and the detection of communication signals plays a central role in this task. In this paper, volunteer eye-blinking is proposed as a human–computer interaction signal, and an intelligent Computer Vision detector was built to handle the captured data in real time using a generic webcam. Eye-blink detection was treated as an extension of eye-state classification, and the base pipeline is delineated as follows: face detection, face alignment, region-of-interest (ROI) extraction, and eye-state classification. This pipeline was complemented with auxiliary models: a rotation compensator, a ROI evaluator, and a moving average filter. Two new datasets were created: the Youtube Eye-state Classification (YEC) dataset, built from the AVSpeech dataset by extracting face images, and the Autonomous Blink Dataset (ABD), built entirely as part of the present work. The YEC dataset was used to train the eye-state classification models; the ABD was specifically designed with volunteer eye-blink detection in mind. The proposed models, a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM), were trained on the YEC dataset, and performance evaluation experiments for both models were conducted across different databases: CeW, ZJU, Eyeblink, Talking Face (public datasets), and ABD. The impact of the proposed auxiliary models was evaluated, and the CNN and SVM models were compared on the eye-state classification task. Promising results were obtained: 97.44% accuracy for the eye-state classification task on the CeW dataset and 92.63% F1-Score for the eye-blink detection task on the ABD dataset.
• An intelligent system for people with mobility impairments.
• Construction of a new eye-blink detection dataset discriminating volunteer blinks.
• Implementation of Machine Learning techniques throughout the proposed pipeline.
• Development of a high-performance, real-time system using generic equipment.
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2021.116073
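
The abstract describes a real-time pipeline (face detection, ROI extraction, eye-state classification, and a moving average filter over the per-frame eye-state signal). The minimal Python sketch below illustrates that general flow on a generic webcam; it is not the authors' implementation: OpenCV Haar cascades stand in for the paper's CNN/SVM eye-state classifiers, the rotation compensator and ROI evaluator are omitted, and the names and parameters (`eye_state`, `window`, `threshold`) are illustrative assumptions.

```python
# Minimal sketch of a webcam blink-detection loop, assuming OpenCV is installed.
# A Haar-cascade eye detector stands in for the paper's trained CNN/SVM
# eye-state classifiers; a simple moving average smooths the per-frame signal.
from collections import deque

import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
EYE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")


def eye_state(face_roi_gray) -> float:
    """Return 1.0 if eyes appear open, 0.0 otherwise (stand-in classifier)."""
    eyes = EYE_CASCADE.detectMultiScale(face_roi_gray, 1.1, 5)
    return 1.0 if len(eyes) > 0 else 0.0


def main(window: int = 5, threshold: float = 0.5) -> None:
    cap = cv2.VideoCapture(0)       # generic webcam
    history = deque(maxlen=window)  # moving-average filter over eye states
    prev_open = True
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, 1.2, 5)
        if len(faces) > 0:
            # Keep the largest detected face as the ROI.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            history.append(eye_state(gray[y:y + h, x:x + w]))
            smoothed = sum(history) / len(history)
            is_open = smoothed >= threshold
            if prev_open and not is_open:
                print("blink detected (eyes closed)")
            prev_open = is_open
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("blink detection sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```

A blink is flagged when the smoothed eye-state signal transitions from open to closed; the paper additionally discriminates volunteer (intentional) blinks from spontaneous ones, which this sketch does not attempt.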