Development of Al-Quran sign language classification based on convolutional neural network
Sign language is the main form of communication used by deaf people. Most of their activities, such as speaking, reading, and learning, involve sign language. To read the Al-Quran, deaf people use Arabic sign language to read its ayat. For them, assistive technologies to aid them in the pr...
Saved in:
Main Authors: | Nizam, Muhamad Zulhairi Mohd; Saad, Shaharil Mad; Suhaimi, Mohd Azlan; Dzahir, Mohd Azuwan Mat; Rahim, Shayfull Zamree Abd; Dzahir, Mohd Azwarie Mat |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Accuracy; Adaptive technology; Applications programs; Artificial neural networks; Deafness; Languages; Machine learning; Mobile computing; Model accuracy; Neural networks; Sign language; Training; Tuning |
Online Access: | Full text |
container_end_page | |
---|---|
container_issue | 1 |
container_start_page | |
container_title | AIP conference proceedings |
container_volume | 2347 |
creator | Nizam, Muhamad Zulhairi Mohd; Saad, Shaharil Mad; Suhaimi, Mohd Azlan; Dzahir, Mohd Azuwan Mat; Rahim, Shayfull Zamree Abd; Dzahir, Mohd Azwarie Mat |
description | Sign language is the main form of communication used by deaf people. Most of their activities, such as speaking, reading, and learning, involve sign language. To read the Al-Quran, deaf people use Arabic sign language to read its ayat. Assistive technologies that aid them in learning and teaching the Al-Quran are therefore very important, since the traditional method is difficult and challenging. One reason is that, traditionally, teachers must first know Arabic Sign Language (ArSL) in order to teach the Al-Quran. Such assistive technology is still considered relatively new and not well developed. In Malaysia and Indonesia, most of the developed technologies are mobile apps and web-based tools, both of which require a continuous internet connection and are only suitable for personal use. Previous research on assistive technologies can be classified into two types of devices: sensor-based devices and image-based devices. Both have their advantages and disadvantages. This project focuses only on an image-based device, since its scope is limited to supervised machine learning, specifically a convolutional neural network (CNN), developed to achieve accuracy above 80% in training and testing. The accuracy of the CNN model can be explained by the resulting pattern obtained from training and testing, which can be described as overfitting, underfitting, or optimum. This project shows that, with appropriate tuning of hyperparameters based on that pattern, the accuracy of the model can be improved. The CNN model is developed from scratch through trial-and-error tuning, since there are no formal techniques. Lastly, the CNN model is converted into the TensorFlow Lite format, ready to be integrated into mobile applications. (See the workflow sketch after the record below.) |
doi_str_mv | 10.1063/5.0051490 |
format | Conference Proceeding |
contributor | Razak, Rafiza Abd; Tahir, Muhammad Faheem Mohd; Mortar, Nurul Aida Mohd; Jamaludin, Liyana; Abdullah, Mohd Mustafa Al Bakri; Rahim, Shayfull Zamree Abd |
publisher | American Institute of Physics, Melville |
publication date | 2021-07-21 |
eissn | 1551-7616 |
coden | APCPCS |
rights | 2021 Author(s). Published by AIP Publishing. |
fulltext | fulltext |
identifier | ISSN: 0094-243X |
ispartof | AIP conference proceedings, 2021, Vol.2347 (1) |
issn | 0094-243X 1551-7616 |
language | eng |
recordid | cdi_scitation_primary_10_1063_5_0051490 |
source | Scitation (AIP) |
subjects | Accuracy; Adaptive technology; Applications programs; Artificial neural networks; Deafness; Languages; Machine learning; Mobile computing; Model accuracy; Neural networks; Sign language; Training; Tuning |
title | Development of Al-Quran sign language classification based on convolutional neural network |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-11T15%3A55%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_scita&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Development%20of%20Al-Quran%20sign%20language%20classification%20based%20on%20convolutional%20neural%20network&rft.btitle=AIP%20conference%20proceedings&rft.au=Nizam,%20Muhamad%20Zulhairi%20Mohd&rft.date=2021-07-21&rft.volume=2347&rft.issue=1&rft.issn=0094-243X&rft.eissn=1551-7616&rft.coden=APCPCS&rft_id=info:doi/10.1063/5.0051490&rft_dat=%3Cproquest_scita%3E2553602543%3C/proquest_scita%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2553602543&rft_id=info:pmid/&rfr_iscdi=true |
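The abstract above describes a concrete workflow: train a small convolutional neural network from scratch on Arabic sign language (ArSL) hand-sign images, judge whether the resulting training/testing pattern indicates overfitting, underfitting, or an optimum fit, tune hyperparameters by trial and error, and finally convert the model to TensorFlow Lite for mobile integration. Below is a minimal sketch of such a workflow, not the authors' actual code; the image size, layer sizes, number of classes, and the `dataset/` directory layout are assumptions made for illustration.

```python
# Minimal sketch (assumed architecture and data layout, not the paper's model):
# a small CNN for ArSL hand-sign image classification, a simple check of the
# training/validation accuracy pattern, and export to TensorFlow Lite.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 28          # assumption: one class per Arabic sign-language letter
IMG_SIZE = (64, 64)       # assumption: small input images

# Load labelled hand-sign images from a folder-per-class layout (assumed paths).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, batch_size=32)

# CNN built from scratch; depth, filter counts, and dropout are the kind of
# hyperparameters the abstract says were tuned by trial and error.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.3),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(train_ds, validation_data=val_ds, epochs=20)

# Inspect the resulting pattern: training accuracy far above validation
# accuracy suggests overfitting; both staying low suggests underfitting.
train_acc = history.history["accuracy"][-1]
val_acc = history.history["val_accuracy"][-1]
print(f"final accuracy: train={train_acc:.2f}, val={val_acc:.2f}")

# Convert the trained model to TensorFlow Lite for mobile integration.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("arsl_cnn.tflite", "wb") as f:
    f.write(converter.convert())
```

In such a sketch, a large gap between the printed training and validation accuracies would point to overfitting (e.g. add dropout or reduce model size), while both being low would point to underfitting; the resulting TensorFlow Lite file can be bundled with a mobile app and run offline, avoiding the continuous internet connection the abstract criticizes in existing mobile and web-based tools.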