A Model for Qur’anic Sign Language Recognition Based on Deep Learning Algorithms


Bibliographic Details
Published in: Journal of Sensors, 2023-06, Vol. 2023 (1)
Authors: AbdElghfar, Hany A., Ahmed, Abdelmoty M., Alani, Ali A., AbdElaal, Hammam M., Bouallegue, Belgacem, Khattab, Mahmoud M., Tharwat, Gamal, Youness, Hassan A.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Deaf and mute Muslims often cannot reach advanced levels of education because of barriers to their educational attainment. As a result, they cannot learn, recite, and understand the meanings and interpretations of the Holy Qur’an as easily as others, which also prevents them from performing Islamic rituals, such as prayer, that require learning and reading the Holy Qur’an. In this paper, we propose a new model for Qur’anic sign language recognition based on convolutional neural networks, built from data preparation, preprocessing, feature extraction, and classification stages. The proposed model aims to recognize Arabic sign language movements, specifically the hand gestures that refer to the dashed Qur’anic letters, in order to help deaf and mute people learn their Islamic rituals. The experiments were conducted on a subset of a large Arabic sign language dataset, ArSL2018, covering the 14 dashed letters in the Holy Qur’an; this subset contains 24,137 images. The experimental results demonstrate that the proposed model performs better than other existing models.
ISSN: 1687-725X, 1687-7268
DOI: 10.1155/2023/9926245
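
To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of a CNN classifier for the 14 dashed-letter hand-gesture classes. It is not the authors' exact architecture; the directory name "arsl2018_subset/", the 64x64 grayscale input size, the layer sizes, and the train/validation split are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact model): a small CNN that classifies
# hand-gesture images into the 14 dashed Qur'anic letter classes.
# Assumptions: the ArSL2018 subset is stored locally in "arsl2018_subset/"
# with one subdirectory per letter class, and images are 64x64 grayscale.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)
NUM_CLASSES = 14  # the 14 dashed letters covered in the paper

# Data preparation / preprocessing: load the image subset and split it.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "arsl2018_subset/",          # hypothetical local path to the 24,137-image subset
    image_size=IMG_SIZE,
    color_mode="grayscale",
    validation_split=0.2,
    subset="training",
    seed=42,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "arsl2018_subset/",
    image_size=IMG_SIZE,
    color_mode="grayscale",
    validation_split=0.2,
    subset="validation",
    seed=42,
)

# Feature extraction (convolution + pooling blocks) followed by
# classification (dense softmax head over the 14 classes).
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The four stages named in the abstract map onto the sketch as comments indicate: dataset loading and rescaling stand in for data preparation and preprocessing, the convolutional blocks for feature extraction, and the softmax head for classification.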