Sign language recognition using deep learning

Bibliographic Details
Main Authors: Nallapareddy, Sai Rahul Reddy; Viji, D.
Format: Conference Proceedings
Language: English
Subjects:
Online Access: Full text
Description
Summary: The development of computer vision technology has made it simpler than ever to learn and use sign languages to converse with people who are deaf or mute. Exciting research is being done on a global platform for sign language communication. Deep learning and image processing are used to provide a mechanism for recognising signs in both Indian Sign Language and American Sign Language. The program can predict the user's A-Z alphabet signs. Converting RGB (red, green, and blue) images to grayscale considerably reduces the Convolutional Neural Network's training time and storage requirements. The experiment's goal is to find the best image manipulation and deep learning architecture for the system's mobile applications. The VGG16, Inception-v3, and MobileNet-v2 networks are trained from scratch on the database, and the report compares their predicted accuracies. The finalised Inception-v3 network improved training accuracy to 98.3% for American Sign Language and 99.8% for Indian Sign Language, and also increased validation accuracy to 99.5% and 99.7%, respectively.
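The summary notes that converting RGB images to grayscale reduces the CNN's training time and storage needs, since the input shrinks from three channels to one. A minimal sketch of such a conversion in NumPy (the paper does not specify its exact preprocessing; the ITU-R BT.601 luma weights used here are a common choice and an assumption):

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an H x W x 3 RGB image array to a single-channel
    H x W grayscale array using ITU-R BT.601 luma weights.
    Note: these particular weights are an illustrative assumption,
    not taken from the paper."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb[..., :3] @ weights

# Dummy 2x2 "image": red, green, blue, and white pixels.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.float64)
gray = to_grayscale(img)
print(gray.shape)  # (2, 2) -- one channel instead of three
```

With one channel instead of three, each training sample occupies a third of the memory, which is consistent with the storage and training-time savings the abstract describes.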
ISSN: 0094-243X, 1551-7616
DOI: 10.1063/5.0217118