Intelligent real-time Arabic sign language classification using attention-based inception and BiLSTM
Published in: Computers & Electrical Engineering, 2021-10, Vol. 95, p. 107395, Article 107395
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
• A bio-inspired, novel attention-based Inception architecture is proposed that can adapt to different types of spatial context using convolution filters of different sizes. Because the characteristics of each dataset are unique, the attention mechanism helps the model focus on the most informative features to improve classification performance.
• The shallow Inception model is designed with a two-layer attention mechanism: it has fewer layers but a large number of convolution filters, which addresses the overfitting caused by small dataset sizes.
• An LSTM-based recurrent neural network (RNN) module is proposed to extract temporal features after the Inception module is applied.
• The proposed model is lightweight, with fewer parameters, and requires less processing time.
• The proposed model achieves good performance for both dynamic and static signs and gestures.
Bio-inspired deep learning models have revolutionized sign language classification, achieving extraordinary accuracy and human-like video understanding. Recognizing and classifying sign language videos in real time is challenging because the duration and speed of each sign vary across subjects, the background is dynamic in most videos, and the classification result must be produced in real time. This study proposes a model that combines a convolutional neural network (CNN) Inception model with an attention mechanism for extracting spatial features and a bidirectional long short-term memory (Bi-LSTM) network for extracting temporal features. The proposed model is tested on datasets with highly variable characteristics, such as different clothing, variable lighting, and variable distance from the camera. Real-time classification achieves significantly early detections while delivering performance comparable to offline operation. The proposed model has fewer parameters and fewer deep learning layers, and requires significantly less processing time, than state-of-the-art models.
The Inception model with an attention mechanism and two attention blocks. [Figure omitted]
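The pipeline the abstract describes — multi-branch spatial filters of different sizes, attention over the branches, then a Bi-LSTM over the per-frame features — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy dimensions, the linear filter banks standing in for convolution branches, and the single-layer LSTM cell are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inception_attention(frame, branch_ws):
    """Multi-branch spatial feature extractor with branch attention.
    Each linear filter bank stands in for an Inception conv branch of a
    different kernel size; attention re-weights the branch outputs."""
    stacked = np.stack([frame @ W for W in branch_ws])  # (n_branches, feat)
    scores = stacked.mean(axis=1)                       # global-average pool per branch
    attn = softmax(scores)                              # attention over branches
    return (attn[:, None] * stacked).sum(axis=0)        # attended spatial feature

def lstm_pass(xs, Wx, Wh, b, reverse=False):
    """Single-direction LSTM over a sequence of feature vectors."""
    hid = Wh.shape[0]
    h, c = np.zeros(hid), np.zeros(hid)
    order = list(reversed(xs)) if reverse else xs
    outs = []
    for x in order:
        z = x @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)                     # gate pre-activations
        i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
        outs.append(h)
    return outs[::-1] if reverse else outs              # align to original time order

# Toy dimensions (assumptions, not values from the paper)
T, in_dim, feat, hid = 8, 16, 12, 6
frames = rng.normal(size=(T, in_dim))                   # stand-in per-frame inputs
branch_ws = [rng.normal(size=(in_dim, feat)) * 0.1 for _ in range(3)]  # 3 "kernel sizes"
Wx = rng.normal(size=(feat, 4 * hid)) * 0.1
Wh = rng.normal(size=(hid, 4 * hid)) * 0.1
b = np.zeros(4 * hid)

spatial = [inception_attention(f, branch_ws) for f in frames]  # spatial stage
fwd = lstm_pass(spatial, Wx, Wh, b)                     # forward LSTM
bwd = lstm_pass(spatial, Wx, Wh, b, reverse=True)       # backward LSTM
bilstm_out = np.concatenate([fwd[-1], bwd[0]])          # final Bi-LSTM representation
print(bilstm_out.shape)
```

A classifier head (e.g. a softmax layer over sign classes) would consume `bilstm_out`; in the real model the branches are 2-D convolutions over video frames and the whole network is trained end to end.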
ISSN: 0045-7906, 1879-0755
DOI: 10.1016/j.compeleceng.2021.107395