Design of Arabic Sign Language Recognition Model

Deaf people use sign language to communicate; it combines gestures, movements, postures, and facial expressions that correspond to the letters and words of spoken languages. The proposed Arabic sign language recognition model helps deaf and hard-of-hearing people communicate effectively with hearing people. Recognition converts signs into letters in four stages: an image-loading stage, which loads the images of Arabic sign language alphabet signs later used to train and test the model; a pre-processing stage, which applies image-processing techniques such as normalization, image augmentation, resizing, and filtering to extract the features needed for accurate recognition; a training stage, carried out with deep learning techniques such as a convolutional neural network (CNN); and a testing stage, which measures how well the model performs on images it has not seen before. The model was built and tested mainly with the PyTorch library. It is evaluated on ArASL2018, a dataset of 54,000 images covering 32 alphabet signs gathered from 40 signers and split into a training set and a testing set. We ensured that the system is reliable in terms of accuracy, time, and ease of use, as explained in detail in this report. Finally, future work will extend the model to convert Arabic sign language into Arabic text.

Bibliographic Details

Main Authors: Al-Barham, Muhammad; Jamal, Ahmad; Al-Yaman, Musa
Format: Article
Language: English
description Deaf people use sign language to communicate; it combines gestures, movements, postures, and facial expressions that correspond to the letters and words of spoken languages. The proposed Arabic sign language recognition model helps deaf and hard-of-hearing people communicate effectively with hearing people. Recognition converts signs into letters in four stages: an image-loading stage, which loads the images of Arabic sign language alphabet signs later used to train and test the model; a pre-processing stage, which applies image-processing techniques such as normalization, image augmentation, resizing, and filtering to extract the features needed for accurate recognition; a training stage, carried out with deep learning techniques such as a convolutional neural network (CNN); and a testing stage, which measures how well the model performs on images it has not seen before. The model was built and tested mainly with the PyTorch library. It is evaluated on ArASL2018, a dataset of 54,000 images covering 32 alphabet signs gathered from 40 signers and split into a training set and a testing set. We ensured that the system is reliable in terms of accuracy, time, and ease of use, as explained in detail in this report. Finally, future work will extend the model to convert Arabic sign language into Arabic text.
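The training stage described above — a CNN classifier over 32 Arabic alphabet signs, built with PyTorch — can be sketched as follows. This is a minimal illustration, not the authors' model: the layer sizes, the 64×64 grayscale input resolution, and the `ArSLNet` name are assumptions, since the abstract does not specify an exact architecture.

```python
import torch
import torch.nn as nn

class ArSLNet(nn.Module):
    """Illustrative small CNN for 32-class Arabic sign alphabet recognition.

    Assumes pre-processed 64x64 single-channel (grayscale) images; the
    paper's actual architecture and input size are not given in the abstract.
    """
    def __init__(self, num_classes: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x64x64 -> 16x64x64
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logits, one per alphabet sign

model = ArSLNet()
logits = model(torch.randn(4, 1, 64, 64))  # a batch of 4 dummy images
print(logits.shape)  # torch.Size([4, 32])
```

In a real run, the logits would be fed to `nn.CrossEntropyLoss` during training on the ArASL2018 training split, and `logits.argmax(dim=1)` would give the predicted sign at test time.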
format Article
identifier DOI: 10.48550/arxiv.2301.02693
language eng
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Design of Arabic Sign Language Recognition Model