An Intelligent Kurdish Sign Language Recognition System Based on Tuned CNN
Published in: SN Computer Science, 2022-09, Vol. 3 (6), p. 481, Article 481
Authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Hearing-impaired individuals have both hearing and speech disabilities. Therefore, they use a special language that involves visual gestures—known as "sign language"—for communicating ideas and emotions. Recognizing the gestures contained in sign language enables deaf people to communicate more effectively with their interlocutor. It also helps people without such disabilities understand and identify those signs, thereby enriching communication. However, designing a system that can automatically identify signs is a challenging task, especially for Kurdish sign language. This is attributable to the unavailability of a dataset and the lack of a standardized sign language. In this study, we investigate the problem by collecting a dataset of seven static signs and designing a model for sign recognition. The dataset consists of 3690 high-resolution images taken mostly from college students. To develop the classifier, a four-layer convolutional neural network model with a filter size of 5 × 5 was designed. To compare the model performance, two other pre-trained networks, namely MobileNetV2 and VGG16, were trained and fine-tuned using the same dataset. After extensive hyperparameter tuning, the proposed approach achieved the same outcome as the two pre-trained networks, with an accuracy of 99.75%. That is, the model identified 396 of the 397 images in the test set. In addition, we performed an external test using 58 images of various signs, and the model classified nearly all of the images correctly. This demonstrates that our approach achieved an outstanding result, which can be considered a first in the field.
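The four-layer CNN described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact code: the abstract specifies only four convolutional layers with 5 × 5 filters and seven output classes, so the filter counts, the 64 × 64 RGB input size, the pooling layers, and the dense head are all assumptions.

```python
# Hypothetical sketch of the paper's classifier: a four-layer CNN with
# 5x5 filters and a seven-class softmax output. Layer widths, input
# size, pooling, and the dense head are assumptions, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ksl_cnn(input_shape=(64, 64, 3), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Four convolutional layers, each with a 5x5 filter size
        layers.Conv2D(32, (5, 5), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (5, 5), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (5, 5), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (5, 5), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Classification head (assumed)
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ksl_cnn()
```

For the comparison baselines, the abstract describes fine-tuning MobileNetV2 and VGG16 on the same seven-class dataset, i.e. replacing each network's classification head and retraining on the collected images.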
ISSN: 2662-995X, 2661-8907
DOI: 10.1007/s42979-022-01394-5