Convert sign language to text with CNN
Effective communication is crucial in our daily lives, and it occurs through various channels such as vocal, written, and body language. However, individuals with hearing impairments often rely on sign language as the primary means of communication. The inability to understand sign language can lead...
Saved in:
Main authors: | Mahato, Shivam Kr; Jeya, R. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | 1 |
container_start_page | |
container_title | |
container_volume | 3075 |
creator | Mahato, Shivam Kr; Jeya, R. |
description | Effective communication is crucial in our daily lives, and it occurs through various channels such as vocal, written, and body language. However, individuals with hearing impairments often rely on sign language as the primary means of communication. The inability to understand sign language can lead to isolation and barriers in communication, hindering the social lives of deaf individuals. To address this need, we propose a marker-free, visual Indian Sign Language identification system that employs image processing, computer vision, and neural network techniques. Our proposed system analyzes video footage captured by a webcam to recognize hand gestures and translate them into text, which is subsequently converted into audio. The system uses a range of image processing techniques to identify the shape of the hand from continuous video frames, including background subtraction, thresholding, and contour detection. The Haar Cascade Classifier algorithm is used to interpret the signs and assign meaning to them based on the recognized patterns. Finally, a speech synthesizer is employed to convert the displayed text into speech. The proposed system is intended to improve the social lives of deaf individuals by facilitating communication with hearing individuals. It is designed to be user-friendly, efficient, and affordable, as it does not require any additional hardware or markers to recognize signs. The proposed system could be integrated into various devices such as smartphones, tablets, or laptops, making it accessible to a wide range of users. The implementation of such a system could potentially break down communication barriers between the deaf and hearing communities, providing deaf individuals with more opportunities to interact with others and participate in society. |
doi_str_mv | 10.1063/5.0217230 |
format | Conference Proceeding |
contributor | Godfrey Winster, S; Pushpalatha, M; Baskar, M; Kishore Anthuvan Sahayaraj, K |
publisher | American Institute of Physics, Melville |
publication_date | 2024-07-29 |
eissn | 1551-7616 |
coden | APCPCS |
rights | 2024 Author(s). Published under an exclusive license by AIP Publishing. |
pages | 10 |
fulltext | fulltext |
identifier | ISSN: 0094-243X |
ispartof | AIP conference proceedings, 2024, Vol.3075 (1) |
issn | 0094-243X; 1551-7616 |
language | eng |
recordid | cdi_scitation_primary_10_1063_5_0217230 |
source | AIP Journals Complete |
subjects | Algorithms; Communication; Community participation; Computer vision; Deafness; Gesture recognition; Hearing; Human communication; Image processing; Pattern recognition; Shape; Shape recognition; Sign language; Smartphones; Speech |
title | Convert sign language to text with CNN |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T02%3A32%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_scita&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Convert%20sign%20language%20to%20text%20with%20CNN&rft.btitle=AIP%20conference%20proceedings&rft.au=Mahato,%20Shivam%20Kr&rft.date=2024-07-29&rft.volume=3075&rft.issue=1&rft.issn=0094-243X&rft.eissn=1551-7616&rft.coden=APCPCS&rft_id=info:doi/10.1063/5.0217230&rft_dat=%3Cproquest_scita%3E3085724145%3C/proquest_scita%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3085724145&rft_id=info:pmid/&rfr_iscdi=true |
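The description above outlines a background-subtraction, thresholding, and contour-detection pipeline for isolating the hand in each video frame. The sketch below is not the paper's implementation: it is a minimal NumPy-only illustration of those three steps, where contour detection is approximated by a bounding box of the foreground mask, and the function name `segment_hand` and its threshold default are invented for illustration (a real system would typically use OpenCV's `cv2.absdiff`, `cv2.threshold`, and `cv2.findContours`).

```python
import numpy as np

def segment_hand(frame, background, thresh=30):
    """Toy segmentation pipeline: background subtraction ->
    thresholding -> bounding box of the foreground region.

    `frame` and `background` are grayscale uint8 arrays of the
    same shape. Returns (binary mask, (x_min, y_min, x_max, y_max))
    or (mask, None) when nothing exceeds the threshold.
    """
    # Background subtraction: absolute per-pixel difference
    diff = np.abs(frame.astype(int) - background.astype(int))
    # Thresholding: binary mask of pixels that changed "enough"
    mask = diff > thresh
    if not mask.any():
        return mask, None
    # Crude stand-in for contour detection: the axis-aligned
    # bounding box of all foreground pixels
    ys, xs = np.nonzero(mask)
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return mask, bbox

# Usage with a synthetic frame: a bright 3x3 "hand" on a dark background
bg = np.zeros((8, 8), dtype=np.uint8)
fr = bg.copy()
fr[2:5, 3:6] = 200
mask, bbox = segment_hand(fr, bg)
```

The bounding box would then be cropped out and passed to the classifier stage (a Haar cascade or CNN in the paper's pipeline); the crop-and-classify step is omitted here.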