Sign Language Interpreter using Kinect Motion Sensor using Machine Learning
Saved in:
Published in: | International Journal of Innovative Technology and Exploring Engineering, 2019-10, Vol. 8 (12), pp. 3151-3156 |
---|---|
Main Authors: | , |
Format: | Article |
Language: | English |
Online Access: | Full text |
Summary: | Sign language is the means through which deaf and mute people usually communicate with one another. Hearing-impaired people often find it difficult to interact with society because most hearing individuals cannot understand their sign language. To bridge this gap, the proposed system acts as a mediator between impaired and hearing people. The system uses a Kinect motion sensor to capture the signs; the sensor records 3-dimensional dynamic gestures. This study therefore proposes a method for extracting features from dynamic gestures of Indian Sign Language (ISL). American Sign Language (ASL) is popular and widely used in research and development, whereas ISL has only recently been standardized, so ISL recognition is less explored. The proposed method extracts features from a sign and converts it to the intended textual form, integrating both local and global information of the signs. This integrated feature improves system performance, and the system serves as an aid to disabled people. Its applications include hospitals, government sectors, and some multinational companies. |
---|---|
ISSN: | 2278-3075 |
DOI: | 10.35940/ijitee.L2645.1081219 |
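The abstract above outlines a pipeline: capture 3-dimensional dynamic gestures with the Kinect sensor, extract features that combine local and global information of the sign, and map them to text. A minimal sketch of such a pipeline is shown below, assuming skeleton joint coordinates have already been read from the sensor into NumPy arrays and using an ordinary k-nearest-neighbour classifier; the paper's actual feature set and classifier are not specified in this record, so the feature definitions and model here are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's method): gesture sequences arrive as
# NumPy arrays of shape (frames, joints, 3), e.g. Kinect skeleton joints.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_features(seq: np.ndarray) -> np.ndarray:
    """Combine 'local' and 'global' information of one gesture sequence."""
    # Local information: joint positions relative to a reference joint
    # (here the first joint), averaged over time so sequences of
    # different lengths produce fixed-size vectors.
    local = (seq - seq[:, :1, :]).mean(axis=0).ravel()
    # Global information: overall position and spread of the trajectory.
    global_mean = seq.mean(axis=(0, 1))
    global_std = seq.std(axis=(0, 1))
    return np.concatenate([local, global_mean, global_std])

# Hypothetical training data: recorded sequences and their word labels.
train_seqs = [np.random.rand(30, 20, 3) for _ in range(10)]  # placeholder data
train_labels = ["hello", "thanks"] * 5

X = np.stack([extract_features(s) for s in train_seqs])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_labels)

new_sign = np.random.rand(30, 20, 3)                 # a newly captured sign
print(clf.predict([extract_features(new_sign)])[0])  # predicted word
```

In practice the placeholder arrays would be replaced by joint streams read from the Kinect SDK, and the simple k-NN model by whatever classifier the authors trained; the sketch only illustrates how local and global descriptors of a dynamic gesture can be concatenated into one feature vector before classification into text.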