A Model for Qur’anic Sign Language Recognition Based on Deep Learning Algorithms
Deaf and mute Muslims often cannot reach advanced levels of education because their impairment obstructs their educational attainment. As a result, they cannot learn, recite, and understand the meanings and interpretations of the Holy Qur’an as easily as other people, which also prevents them from performing Islamic rituals, such as prayer, that require learning and reading the Holy Qur’an. In this paper, we propose a new model for Qur’anic sign language recognition based on convolutional neural networks, built through data preparation, preprocessing, feature extraction, and classification stages. The proposed model recognizes Arabic sign language movements by identifying the hand gestures that refer to the dashed Qur’anic letters, in order to help deaf and mute people learn their Islamic rituals. Experiments were conducted on a subset of the large Arabic sign language dataset ArSL2018 covering the 14 dashed letters of the Holy Qur’an; this subset contains 24,137 images. The experimental results show that the proposed model outperforms existing models.
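The abstract describes a four-stage pipeline (data preparation, preprocessing, feature extraction, classification) built on a convolutional neural network. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the authors' published architecture: the 64×64 grayscale input shape, layer sizes, and training settings are assumptions chosen only to show how a 14-class CNN classifier of this kind could be assembled in Keras.

```python
# Minimal illustrative sketch (not the paper's architecture):
# a small CNN classifier for the 14 dashed-letter classes,
# assuming 64x64 grayscale sign images.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 14           # the 14 dashed Qur'anic letters
INPUT_SHAPE = (64, 64, 1)  # assumed image size; adjust to the actual data

def build_model() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Rescaling(1.0 / 255),              # preprocessing: scale pixels to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),  # feature extraction
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # classification stage
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

if __name__ == "__main__":
    build_model().summary()
```

Training on the relevant ArSL2018 subset would then amount to calling model.fit() on images labeled with the 14 dashed-letter classes; the actual architecture and hyperparameters should be taken from the paper itself.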
Saved in:
Published in: | Journal of sensors, 2023-06, Vol. 2023 (1) |
Main authors: | AbdElghfar, Hany A.; Ahmed, Abdelmoty M.; Alani, Ali A.; AbdElaal, Hammam M.; Bouallegue, Belgacem; Khattab, Mahmoud M.; Tharwat, Gamal; Youness, Hassan A. |
Format: | Article |
Language: | English |
DOI: | 10.1155/2023/9926245 |
ISSN: | 1687-725X (print); 1687-7268 (electronic) |
Publisher: | New York: Hindawi |
Subjects: | Accuracy; Algorithms; Artificial neural networks; Classification; Communication; Datasets; Deafness; Deep learning; Feature extraction; Machine learning; Neural networks; Sign language; Support vector machines |
Online access: | Full text |